Security Considerations with Forefront Edge Virtual Deployments
Last revision: April 2010
Jim Harrison, Program Manager, ISA SE
Gershon Levitz, Program Manager, Forefront Edge
Yuri Diogenes, CSS Security Support Engineer
Mohit Saxena, CSS Security Technical Lead
This article has been updated to include Microsoft Forefront Intelligent Access Gateway (IAG) 2007 Service Pack 2 (SP2) and Unified Access Gateway (UAG) 2010. References to individual products have been generalized into “Forefront Edge products”, except where each product presents unique concerns or benefits. Otherwise, this article is fundamentally unchanged from its original form.
Virtualization of server workloads has become an increasingly popular method for making more efficient use of computer hardware and the supporting infrastructure. Virtualization provides many advantages to the data center administrator, while necessarily changing the way they create and manage their deployments. Server application virtualization is a more difficult undertaking due to the complexity of properly allocating the hardware across multiple server workloads. Combining applications which cannot coexist on a single machine across multiple child partitions within the same host presents unique sizing and security challenges as well. Likewise, resulting network virtualization and the potential for multiple simultaneous server failures when the parent partition fails presents unique security and availability problems.
This article provides specific guidelines for deploying Forefront Edge products within hardware virtualization. We strongly recommend that you familiarize yourself with the deployment and best practices documents provided in the References section.
Supported Virtual Environments
Forefront Edge products are supported on hardware virtualization in accordance with the following programs:
For example, if a hardware virtualization platform is listed as “validated” with the SVVP (not “under evaluation”), Forefront Edge products will be supported for production use on that platform, within the limits prescribed in the Microsoft Product Support Lifecycle, Non-Microsoft hardware virtualization policies, and the system requirements applicable to that product version and edition.
For hardware virtualization platforms not listed with the SVVP, Forefront Edge products are supported in accordance with remaining Microsoft support policies, limited as follows:
Desktop virtualization, such as Microsoft Virtual PC or similar 3rd-party product—Supported for demonstration and educational use only.
Server Virtualization, such as Microsoft Virtual Server or similar 3rd-party product—Supported, but not recommended for production use.
As stated in Microsoft Knowledge Base article 897615 (http://support.microsoft.com/kb/897615), Microsoft support engineers may request that a customer reproduce a reported problem on physical hardware (or within an SVVP-listed hardware virtualization platform) before continuing with the case. If the problem cannot be reproduced on physical hardware or on an SVVP-listed server virtualization product of a similar class, the case may be deferred to 3rd-party product support.
The primary deployment criteria for any edge protection deployment must be security, stability and performance. Defining the priority of each of these is a task that must incorporate deep analysis of the organization’s line-of-business (LOB) application requirements, general and network security needs as well as any regulatory compliance. Although it is not possible to address all possible scenarios, this whitepaper will outline the critical points for the most common deployments.
It is an inalterable fact that, due to resource sharing among virtual machines, a server application operating on dedicated hardware will perform better than the same application operating in a virtualized environment of near-identical characteristics (same number and class of CPUs, same memory, and so on). For instance, a traffic processing load that would bring a hardware-based ISA Server to 80% CPU may well produce a denial-of-service (DoS) state for a similarly-configured virtual ISA Server, especially when the parent partition is overtasked by the total child partition workload or by resource bottlenecks created by sibling child partitions.
What this means to the engineer who is tasked with virtualizing their server applications is that in many cases, the resource allocations for a particular application may have to be expanded to account for the resource sharing incurred in a virtual machine. Exactly how much expansion is required can only be determined through testing on the planned virtual deployment.
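As a very rough sketch of this sizing exercise, the arithmetic can be expressed as follows. The 25% virtualization overhead and 80% target utilization figures used here are purely hypothetical placeholders; only testing on the planned virtual deployment can supply real values.

```python
# Hypothetical sizing sketch -- the 25% virtualization overhead and 80% target
# utilization are placeholder assumptions; only testing on the planned virtual
# deployment can supply real figures.
import math

def required_virtual_cpus(hw_cpus, hw_peak_util, virt_overhead=0.25, target_util=0.80):
    """Estimate the vCPUs needed to keep peak utilization at or below target_util."""
    hw_work = hw_cpus * hw_peak_util          # useful work done on hardware, in CPU units
    effective_per_vcpu = 1.0 - virt_overhead  # useful work each vCPU can deliver
    return math.ceil(hw_work / (effective_per_vcpu * target_util))

# A 4-CPU hardware server peaking at 80% CPU needs a larger allocation once virtualized:
print(required_virtual_cpus(4, 0.80))
```

The point of the sketch is simply that the virtual allocation grows in proportion to both the measured overhead and the utilization headroom you want to preserve.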
Likewise, data center management processes must be re-examined and redefined to accommodate the problems that will arise when human, software or hardware error causes the loss of multiple child partitions within a parent partition. This condition not only represents a more significant business operational impact, but also a potential security issue if the ISA Server or Forefront TMG represents one of the now-unavailable child partitions.
Define the traffic profile
Although this process is no different for virtual deployments than for hardware deployments, it is much more important in the virtual case, because the resource requirements of one child partition impact the resources available to the other partitions operating on the same virtual host; especially the parent partition.
Until the traffic profile aspect of the deployment plan is clearly understood, the performance and security requirements for Forefront Edge products cannot be accurately determined, evaluated, or satisfied. Microsoft Knowledge Base article 832017 (http://support.microsoft.com/kb/832017) defines the traffic profiles for most Windows-based or Microsoft-built applications, including ISA Server and Forefront TMG, but it does not define the traffic profile for non-Microsoft products. Certain assumptions can safely be made in many cases; a mail server will use common mail protocols, such as SMTP, POP3, IMAP or even HTTP(S) if it provides webmail services. If you intend to use a Forefront Edge product to control custom application traffic, you may need to seek this information from the product vendor. In some cases, you may need to experiment with the application and use a network analysis tool to sort this out.
Once the traffic profile has been determined, the next step is to determine traffic load in the context of each application or service. This step is also critical in order to accurately predict its impact on Forefront Edge product performance and the overall network capacity. You may need to perform some traffic flow analysis in your current deployment to understand the present traffic load and thus predict how this will change as your organization and its traffic needs evolve.
Note the following best practices:
Where possible, use ISA Server or Forefront TMG to control traffic in the virtual networks—This will help you control traffic between networks and detect attacks from local and remote hosts, virtual and physical.
Avoid the use of “allow all” rules—If your application vendor cannot clearly define the traffic profile for you, some time spent with your favorite network capture tool can be of use.
Restrict RPC and DCOM to specific ports—By default, RPC and DCOM will use whatever ephemeral ports are available when the related server application starts up and requests connections or sockets. By limiting the range of ports available to them, you can also limit your acceptable traffic profile.
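As an illustration of the RPC port restriction described above, the registry values documented in Microsoft Knowledge Base article 154596 can be set from the command line. The 5000-5100 range below is an example only; choose a range that matches your traffic profile, and note that a reboot is required for the change to take effect.

```shell
rem Restrict RPC dynamic port allocation to 5000-5100 (example range only;
rem pick a range sized to your traffic profile). Reboot required afterward.
reg add HKLM\Software\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /d 5000-5100 /f
reg add HKLM\Software\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /d Y /f
reg add HKLM\Software\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /d Y /f
```

Once the range is fixed, the corresponding firewall policy rules can be written against those specific ports instead of the whole ephemeral range.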
Define the Security Boundaries
There are multiple aspects to this process, and while there are a few basic rules, you must involve all of the security, networking, application and regulatory managers in the decision process. It may well be that, for all your technical determination and analysis, a regulatory compliance requirement prevents you from deploying application X as a child partition alongside application Y on the same virtual host.
You should avoid mixing virtual applications or servers of differing security contexts within a single parent partition; especially when one or more of them face the network edge. Protecting your Exchange server becomes much more difficult when an adjacent child partition or (worse yet) the parent partition is hosting a game server. This is another place where ISA Server or Forefront TMG can offer protection between hosts. Because child partitions on separate parents are effectively on separate networks, you can potentially use ISA Server or Forefront TMG to isolate those applications and achieve greater overall security than if they were deployed on dedicated hardware.
Note the following best practices:
Install Windows Server 2008 Core on the parent—This limits the attack surface and patching requirements to the bare minimum. Because Windows Server 2008 Core does not support applications which rely on Windows UI mechanisms, this will help prevent installation of non-essential applications on the parent partition.
Each child partition on a specific parent partition should be of near-identical security—For instance, the Exchange and SharePoint child partitions that users access from the Internet should meet the same security and access requirements as much as possible. You cannot satisfy this if you deploy your Exchange, SharePoint, and game servers as child partitions on the same parent partition.
The parent partition must be up-to-date on patches—A vulnerability of the parent translates to a potential vulnerability on each and every guest it hosts.
Each child partition must be up to date on patches—While an unpatched child partition is not generally as threatening as an unpatched parent partition, if a compromised child partition has access to the parent partition, it may be able to mount an attack on the parent and thus poses a potential threat to all guests; regardless of their vulnerability to that particular threat or their network proximity to the compromised child partition.
Do not use the parent partition as a workstation—The fewer applications that are installed and running on the parent, the smaller the attack surface it presents. If you install Windows Server 2008 Core on the parent, this threat is much better mitigated.
Restrict access and management of the parent partition—As detailed later, the accounts with management access to the parent partition effectively have full control over any and all child partitions.
Use a TPM-based parent partition with BitLocker—The deeper you can enforce access controls to the parent partition, the better protection you afford the child partitions.
Of particular interest in the virtual environment is the question of managing traffic flow for the child partitions, parent partition and the physical network. If a guest has direct access to any physical network, it potentially presents a greater threat to its sibling child partitions and parent partition than if it were forced to pass through a traffic control such as an ISA Server or Forefront TMG server. While defining a network which imposes such traffic controls is a critical part of the network design, management control of this network is even more critical.
Routing traffic around an ISA Server or Forefront TMG guest leaves it unable to provide any security for the network whatsoever, simply because it has been effectively removed from the traffic path. While this case seems no different from a mis-patched network cable in the data center, you must consider that there will be no obvious visual indicators for misrouted virtual networking as there might be with a network cable plugged into the wrong port on a physical patch panel or switch. This makes identifying these problems correspondingly more difficult and time-consuming, and problem resolution that much more costly. The best way to prevent such occurrences is to define and enforce very clear data center change control policies, along with system monitoring and reporting.
Note the following best practices:
Avoid connecting the parent partition to the Internet without additional protection—While the Windows Filtering Platform in Windows Server 2008 provides a much stronger host firewall than previous Windows releases, network security best practices dictate that you should layer your network security. You can accomplish this by using an external layer-3 filtering device between the parent connection and the Internet. ISA Server or Forefront TMG on a separate physical host works well for this purpose.
Avoid connecting the parent partition to any virtual network unless absolutely necessary—Because the parent partition is the key to keeping the child partitions alive and well and because the parent partition is likely to use at least one physical network, the fewer points of entry you provide to the parent partition from a child partition, the better. For instance, Hyper-V “Local” virtual networks are invisible to the parent partition and so are good choices for use as isolated perimeter networks usable only by connected child partitions.
Avoid sharing the same Internet virtual switch connection between multiple guests—You cannot ensure traffic security for your network if your game server child partition is sharing the Internet connection with the ISA Server or Forefront TMG child partition. Better that any child partition which needs Internet access should access it through the ISA Server or Forefront TMG child partition.
Avoid combining your perimeter network segments on a single parent partition—In any deployment, the use of perimeter networks is intended to create security boundaries between networks of differing trust levels. By placing all of these machines and networks on the same parent partition, you may inadvertently bridge these security boundaries through one or more parent partition virtual network connections or by misassignment of a server to the wrong virtual network.
Avoid collapsing your perimeter network design to simplify the virtual network design—Your perimeter network design was created to satisfy the requirements imposed on you by multiple sources. If the design cannot be collapsed in hardware, it is highly unlikely that it can be collapsed in virtual networks.
The parent partition
Regardless of whether the VM deployment is edge- or internally-placed, the parent partition is the most critical machine among them. If the parent partition is compromised or fails, all of its child partitions are threatened.
Note the following best practices:
Use hardware that passes Windows Hardware Quality Labs (WHQL) testing as “certified for”:
Windows Server 2008. If you expect to have server-class functionality and reliability, you cannot hope to achieve that using home-computer class system hardware or drivers. An investment in devices and related drivers that were designed and tested to experience server-class workloads will go a long way toward keeping your virtual deployments on-line under heavy loads. In particular, while it’s generally true that drivers written for Windows Vista will “work” on Windows Server 2008, the odds are that they will not stand up to the heavier workload presented by server applications or virtualization.
Hyper-V. By limiting your choices to hardware which satisfies WHQL testing specifically targeted at Microsoft Hypervisor, you provide a better chance that your virtual deployment will behave properly. Many hardware vendors are working closely with all server virtualization vendors to validate their offerings for one or more server virtualization platforms.
Keep the system drivers current—The single most common cause of server network problems is the system drivers themselves; most commonly, the network drivers. When these need to work closely with other high-performing drivers, such as those found in today’s virtualization solutions, the performance and stability of the system drivers is even more important. While it may not always be possible, especially in test environments, you should consider limiting your production deployments to signed drivers only.
Use Windows Server 2008 Core for the parent partition—This provides the smallest possible attack surface of any Windows Server deployment option, while simultaneously restricting the user’s ability to weaken this security posture.
Disable any externally-facing NICs for the parent partition—After you have created an “external” virtual switch for use by child partitions, you should disable the related virtual NIC in the parent to prevent access to the parent from the Internet.
If you cannot disable “external” virtual switches for the parent, unbind all L3+ protocols and enable WFP for those NICs—By unbinding protocols and setting a heavily-restrictive policy in WFP, the host cannot communicate using a protocol on which an attack depends, and so is not vulnerable to that attack from any network where the protocol is unbound and filtered. In other words, “if I can’t hear you, you can’t bother me”.
If the previous steps cannot be employed to protect the parent, use an external layer-2+ firewall—There should be no reason to make the parent accessible to Internet-based attacks. If you find yourself considering such a deployment, you should re-evaluate your planning.
Use a dedicated, Out-Of-Band (OOB) network connection to provide management connectivity to the parent, as follows:
Dedicated connection—By providing a network connection that is unrelated to any virtual network, the parent will remain available even if the virtual networking mechanisms should fail.
OOB connection—By separating the parent management from the guest networking, you can effectively isolate the parent from the network where application-based attacks would be seen.
Use TPM-supported hardware and BitLocker on Windows Server 2008 to control access to the parent partition and protect child partition disks and definition files from unauthorized access—Server theft is a reality that must be considered in any deployment, and the ability to acquire multiple servers in a single box can only make theft more attractive. By placing all of your guests on a BitLocker-protected disk, you effectively hide your servers from would-be thieves.
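The NIC-disabling step recommended above can be performed from the Server Core command line with netsh. The connection name "External" is a placeholder; substitute the actual connection name reported by the first command.

```shell
rem Enumerate the parent's network connections, then disable the
rem externally-facing one. "External" is a placeholder connection name.
netsh interface show interface
netsh interface set interface name="External" admin=disabled
```

Disabling the parent's virtual NIC this way leaves the external virtual switch in place for the child partitions while removing the parent's own presence on that network.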
Parent and guest connections
You must balance the requirements of your virtual networks with the security needs of the whole environment. For instance, a single virtual network for each partition connection associated with a single NIC offers better off-host network performance than does a physical connection shared by multiple partitions through a single virtual switch. If the child partition imposes a comparatively light network requirement, then it may be a candidate for sharing a virtual network with other child partitions.
See Network security vs. performance for a detailed discussion of the various network definitions and the benefits and problems associated with each.
Child partition performance considerations
You’ll need to obtain a performance “footprint” before defining the virtual machine resources. To accomplish this, you must gather performance data for an extended period (at least two weeks) using performance best practices information for Forefront Edge products so that you can obtain a statistical model for the machine resources used. Once you’ve accomplished this, you’ll have a reasonable idea of the minimum machine resources that you’ll need to provide for the virtual server.
The next step after defining the virtual machine requirements is to build a test environment where you can deploy and test the workload and traffic load combination you intend to use in production. Only through pre-testing can you determine how to best distribute the resources among the child partitions.
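One simple way to reduce the collected counter data to a sizing figure is to take a high percentile of the samples rather than the average, so the allocation covers peak load. The sketch below uses invented sample values and a nearest-rank percentile; your statistical model may well be more sophisticated.

```python
# Sketch: reduce collected performance counter samples to a sizing "footprint".
# The sample values below are invented for illustration only.

def footprint(samples, percentile=95):
    """Return the value at the given percentile (nearest-rank method)."""
    ordered = sorted(samples)
    # Index of the percentile rank within the ordered samples
    idx = max(0, int(round(percentile / 100.0 * len(ordered))) - 1)
    return ordered[idx]

cpu_samples = [22, 35, 41, 38, 77, 52, 48, 90, 45, 40]  # % Processor Time
# Size the vCPU allocation against this peak figure, not the average.
print(footprint(cpu_samples))
```

The same reduction applies to memory, disk, and network counters gathered over the monitoring period.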
CPU and RAM considerations
Any server workload which functions at a given level on hardware of a specific configuration will perform less well on the same hardware configuration when the machine resources are shared with multiple workloads. This is true whether the workloads are combined on a single operating system instance or shared among multiple virtual machines. In fact, the resource requirements of managing multiple workloads are increased when those workloads are also associated with individual operating system instances. For this reason, you should familiarize yourself with the performance best practices recommendations of the virtualization technology you deploy. Although the virtualization functionality offered by each vendor is similar for a given virtualization class (desktop, server, data center), the implementations of that functionality may produce varying results for a given server workload or workload combination.
Note the following best practices:
Avoid combining high-resource child partitions on the same parent—Forefront Edge products can be a significant resource consumer, depending on the traffic profile and any 3rd-party add-ons you may use. If you have multiple high-resource server workloads competing for the same resources, the performance of all workloads may be significantly degraded and may present a denial-of-service as well.
Give Forefront edge products as much CPU and memory as possible—Because they must share resources with other child partitions, the more memory and CPU you provide, the better they can perform in a virtual machine.
Neither ISA Server, IAG 2007, nor 3rd-party add-ons that run within ISA Server benefit from more than four CPUs or more than 4 GB of RAM. Forefront TMG and Forefront UAG impose no such limitations.
Use a virtualization technology which is up to the workload task—If your traffic profile requires network performance at or greater than 1 gigabit per second, using a hardware virtualization product which provides a maximum of 100 megabits per second will result in an underperforming and overtaxed server.
Because the default logging mechanism for ISA Server, Forefront TMG, and Forefront UAG uses a local SQL service instance (MSDE 2000, SQL Express 2005, or SQL Express 2008), the logging requirements for a heavily-utilized server can be quite intense. For instance, the egress proxies managed by Microsoft IT (MSIT) produce over 30 GB per log instance per server per day. If the current Forefront Edge deployment uses MSDE/SQLE, then the performance footprint you obtain must account for this fact. Best practices for logging in ISA Server 2004 (http://technet.microsoft.com/en-us/library/cc302682.aspx) provides some basic performance factors to use when estimating the logging load incurred for a given traffic load. Forefront TMG requirements are discussed in the ISABlog article noted in the References section.
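A back-of-the-envelope estimate of daily log growth can be sketched as below. The ~200 bytes-per-record figure is an assumption for illustration only; measure your own average record size using the factors in the logging best practices article.

```python
# Rough daily log-volume estimate. The 200 bytes-per-record figure is an
# illustrative assumption -- measure your own average log record size.

def daily_log_gb(requests_per_sec, bytes_per_record=200, seconds_per_day=86400):
    """Estimate daily log growth in GB for a sustained request rate."""
    total_bytes = requests_per_sec * bytes_per_record * seconds_per_day
    return total_bytes / (1024 ** 3)

# A sustained 2000 requests/sec at ~200 bytes per log record:
print(round(daily_log_gb(2000), 1))
```

At that sustained rate the estimate lands in the same tens-of-gigabytes-per-day range reported for the MSIT egress proxies, which is why the disk guidance below matters.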
Note the following best practices:
Use separate drives for the child partition OS instance and its logging destination—The temptation to combine these for server definition simplicity must be resisted regardless of the server workload. If all child partitions share the parent partition disk where their respective virtual disks reside, write contention between guests may result in intermittent or extended logging failures. If each guest uses a single VHD for all logical drives, guest contention only adds to this problem.
Use dedicated drives for logging, reporting, and cache mechanisms—The overhead involved with translating child partition disk activity to file access on the parent partition can be significant in high-disk-IO workloads. By providing direct-disk or pass-through access, this overhead is reduced significantly, and the threat of logging failure, and thus traffic failure, is reduced accordingly.
By assigning dedicated drives to a child partition, you lose the ability to employ Windows BitLocker to protect the data stored by the child partition from theft if the parent partition is stolen.
Another aspect of server virtualization is the effect of mixing multiple high-traffic services on a single physical network connection at the parent partition. Even if the remaining host resources are distributed appropriately among the child partitions, if they are all network-heavy applications (such as Web sites, mail servers and Forefront Edge products), and they are expected to serve non-local virtual and real clients, then it may well be beneficial to provide each child partition its own interface to the real network. This will complicate the virtual network model and the management processes you define for your virtual deployments, but the performance and security of your virtual services will be improved.
Note the following best practices:
Keep parent partition NIC drivers current. Where practicable, use only signed drivers.
Use application performance test tools (Exchange, SharePoint, IIS, etc.) to validate network performance in the lab before deploying in production.
Assign a physical NIC to each guest OS whenever possible.
The MS Loopback adapter is *not* a high-performance interface.
Parent access considerations
Note the following best practices:
Monitor the event logs on the parent partition and all child partitions—Most server workloads provide event logging of varying degrees, but these logs are only as useful as the amount of time spent monitoring them. An ignored security event log that is filling with logon failures is a great tool for an account-mining attacker, but of no value whatsoever to the system administrator who ignores it at his peril.
Impose stronger security requirements on the parent partition than on any of its child partitions—For instance, the management accounts that control the VM Exchange server should not have management access to the parent partition. Because the accounts with management access to the parent partition effectively have “more than admin rights” on the guest, access to the parent partition should be heavily restricted. To help prevent accidental outages of multiple child partitions, you should avoid using the same management accounts at the parent partition and the child partitions. While this can’t prevent the parent partition management account from causing a denial-of-service for the child partitions, it can minimize the threat of this occurring through misuse of a child partition management account, such as the Exchange management account.
Change control considerations
Few things have as much impact on server functionality as undocumented configuration changes. Server administrators rightly expect to find the server in a particular state when they log on and will commonly go about their tasks as if that state actually exists. All too often a server is rendered unusable because the current activity conflicts with previous actions, resulting in unexpected server behavior or, at worst, outright server failure.
Note the following best practices:
Define and enforce change control processes—Only through strict change management can you know the state and functionality of your virtual deployments. Even seemingly small changes can have detrimental effects if they are not known when another change is planned or executed.
Application security considerations
As noted previously, your virtual deployments must take relative server security stature into account. Mixing child partitions of dissimilar security stature on a single host is inadvisable, as it may violate the precepts of least privilege and least access; especially if the network structure requires that one or more of the dissimilar applications share a common network, and above all with the parent partition. Of course, the depth and breadth of this separation will depend on your specific resources, needs and requirements, so no single recommendation is appropriate for all deployments. The following table shows an example of how one environment might define application or service prioritization:
| Application or Service | Security Stature (1-3; 1=highest) |
| --- | --- |
| Edge Security (firewall, IDS) | |
| DNS / WINS | |
| Email / Webmail | |
| HR / Personnel | |
| External Web applications | |
| File & Print | |
| Internal Web applications | |
You should note that although Edge Security and Domain Services rate equally, this does not necessarily mean that you should deploy them together as child partitions on the same virtual host. Likewise, these example assignments may not meet with your organization’s definition of relative importance. With deployments where cost is a primary factor such as Small Business Server or Essential Business Server, such combinations may be unavoidable, but these should not be deployed without giving serious consideration to alternatives. For instance, since Windows Essential Business Server deploys three separate machines, you might deploy the security server as a child partition on one machine and deploy the remaining servers as child partitions on a completely separate machine. Of course, these decisions have to be made with consideration for the remaining factors which dictate your data center budget, security, functionality and auditing requirements.
Network security vs. performance
The following diagrams offer simplified forms of the most common network topologies which may be used in virtualized deployments. Each one depicts a specific combination of virtual and physical network associations, as well as a discussion of the benefits and drawbacks of each design. The use of multiple servers for fault tolerance or load-sharing was omitted for diagram clarity. Where one entity presents a potential network performance bottleneck, load-sharing mechanisms such as NLB should be considered.
The diagram in Figure A1 depicts a configuration that makes use of 802.1Q VLAN tagging to separate the internal and external networks within a single virtual switch, which itself is associated with a single NIC connected to a VLAN-capable network switch. All partitions use the same virtual and physical links to reach networks of unlike security context, with their traffic differentiated in the network only by 802.1Q tags. Because the network separation is strictly logical, this creates a scenario where the separation of internal and external traffic can be lost simply through misconfiguration of the associated virtual and/or physical switch. The parent partition management network security is dependent on the same network structure which carries potentially malicious traffic, thus effectively placing the parent and all child partitions at equal risk. Overall network performance is limited by the combination of physical NIC connectivity as well as the processing overhead imposed by the single virtual network structure and the use of 802.1Q VLAN tagging.
The diagram in Figure A2 improves the overall network security and network performance by providing separate virtual and physical network connections for internal and external traffic. Note that since the parent partition management connection remains dependent on the shared connection with the internal network, the network security for the parent partition improves only as much as the network security for the child partitions collectively. The potential for bridging the internal and external networks caused by misconfiguration of the virtual network assignments remains, however.
In Figure A3, the effective security and performance posture for the child partitions has not changed. The parent partition security is improved through separation of the parent partition management network to a host NIC which is not bound to the virtualization network driver and by connecting the parent partition to the internal network through a virtual network associated only with the parent partition and the ISA Server or Forefront TMG server. Thus, ISA Server or Forefront TMG helps protect the parent partition from attacks mounted against the internal network; even those from compromised child partitions on the same host. While it may be possible to use 802.1Q to logically separate the internal and management networks, the security and performance offered by this configuration are no better than that defined in Figure A2.
In Figure A4, the overall network security is improved further by effectively placing ISA Server or Forefront TMG between guest and parent partitions as well as the internal network. The effective network security of the deployment is reduced by the fact that once again, 802.1Q VLAN tagging is used to maintain separation between the guest and parent partitions. In this deployment, ISA Server or Forefront TMG is limited in its ability to protect the parent partition from the child partitions if the virtual VLAN configuration is misconfigured. The network performance of all partitions is completely dependent on the performance offered by the ISA Server or Forefront TMG server.
Figure A5 offers the highest network security possible without adding IPsec or other network-layer security mechanisms, such as 802.1X. The child partitions are assigned a virtual network that is completely separate from the virtual network assigned to the parent partition. Overall performance of the virtual deployment still depends on the performance of the ISA Server or Forefront TMG computer itself, but is improved over Figure A4 because the 802.1Q VLAN processing overhead for the virtual networks has been removed.
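The bridging risk that runs through Figures A2 through A4 can be illustrated with a small sketch. The following Python model is purely illustrative (the partition and virtual network names are hypothetical, and it is not based on any Hyper-V or Virtual Server API); it flags any partition other than the firewall that is attached to both internal- and external-facing virtual networks:

```python
# Hypothetical model of virtual network assignments, for illustration only.
EXTERNAL_NETS = {"vnet-external"}
INTERNAL_NETS = {"vnet-internal"}

def bridging_partitions(assignments):
    """Return partitions attached to both internal and external virtual networks.

    assignments maps a partition name to the set of virtual networks
    bound to its virtual NICs.
    """
    return [name for name, nets in assignments.items()
            if nets & EXTERNAL_NETS and nets & INTERNAL_NETS]

# The firewall VM is expected to span both networks; any other partition
# that does so bridges the internal and external networks.
assignments = {
    "tmg-firewall": {"vnet-external", "vnet-internal"},
    "app-server":   {"vnet-internal", "vnet-external"},  # misconfigured
    "web-server":   {"vnet-internal"},
}

suspect = [p for p in bridging_partitions(assignments) if p != "tmg-firewall"]
print(suspect)  # any partition listed here bridges the two networks
```

A periodic check of this kind, driven from the actual virtual network bindings, is one way to catch the misconfiguration before it exposes the internal network.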
Microsoft Knowledge Base article 555975 How to improve Virtual Server Performance
Microsoft Knowledge Base article 832017 Service Overview and network port requirements for the Windows Server system
Microsoft Knowledge Base article 897613 Microsoft Virtual Server Support Policy
Microsoft Knowledge Base article 897614 Windows Server System software not supported within a Microsoft Virtual Server environment
Microsoft Knowledge Base article 897615 Support Policy for Microsoft software running on non-Microsoft hardware virtualization software
Microsoft Knowledge Base article 925476 Network Load Balancing scenarios that are supported for use with Virtual Server 2005 R2
MSDN blog Hyper-V Terminology
TechNet article Best Practices for Performance in ISA Server 2004
TechNet article Best Practices for Performance in ISA Server 2006
TechNet article ISA Server 2000 Performance Best Practices
TechNet article ISA Server EE Capacity Planning
TechNet article Monitoring and Troubleshooting Performance (ISA Server 2004)
TechNet article Best Practices for Logging in ISA Server 2004
TechNet article How to Back Up and Restore an ISA Server Enterprise Configuration (Enterprise Edition)
TechNet article Windows Server 2008 Security Guide
TechNet blog ISA on a Virtual Server host does not protect the guest machines
TechNet blog Virtual Server Performance Tips
Whitepaper Performance Tuning Guidelines for Windows Server 2003
Whitepaper Performance Tuning Guidelines for Windows Server 2008
Whitepaper Windows Server 2008 Hyper-V and BitLocker Drive Encryption
Whitepapers Infrastructure Planning and Design
Whitepapers Microsoft Assessment and Planning Solution Accelerator
Windows Server Catalog: Windows Server Virtualization Validation Program
Windows Server Catalog: Hardware listing
The following definitions are excerpted from the MSDN blog post Hyper-V Terminology.
- Hypervisor
The hypervisor is the lowest-level component responsible for interaction with the core hardware. It is responsible for creating, managing, and destroying partitions. It directly controls access to processor resources and enforces an externally delivered policy on memory and device access.
- Partition
A partition is the basic entity managed by the hypervisor. It is an abstract container consisting of isolated processor and memory resources, with policies on device access. A partition is a lighter-weight concept than a virtual machine and could be used outside the context of virtual machines to provide a highly isolated execution environment.
- Root Partition
This is the first partition on the computer. Specifically, it is the partition responsible for initially starting the hypervisor, and it is the only partition that has direct access to memory and devices.
- Parent Partition
The parent partition is a partition that can call the hypervisor and request that new partitions be created. In the first release of Hyper-V, the parent and root partitions are one and the same, and there can be only one parent partition.
- Child Partition
Child partitions are partitions created by the hypervisor in response to a request from the parent partition. There are a few key differences between a child partition and a parent or root partition. Child partitions cannot create new partitions. Child partitions do not have direct access to devices; any attempt to interact with hardware directly is routed to the parent partition. Child partitions do not have direct access to memory; when a child partition tries to access memory, the hypervisor and virtualization stack re-map the request to different memory locations.
- Virtual Machine
A virtual machine is a superset of a child partition: a child partition combined with virtualization stack components that provide functionality such as access to emulated devices and the ability to save the state of a virtual machine. Because a virtual machine is essentially a specialized partition, people tend to use the terms "partition" and "virtual machine" interchangeably. However, while a virtual machine always has a partition associated with it, a partition is not always a virtual machine.
- Guest Operating System
This is the operating system or runtime environment present inside a partition. Historically, with Virtual Server and Virtual PC, we spoke of a "host operating system" running on the physical hardware and a "guest operating system" running on the host. With Hyper-V, all operating systems on the physical computer run on top of the hypervisor, so the correct equivalent terms are actually "parent guest operating system" and "child guest operating system". Because most people find these terms confusing, "physical operating system" and "guest operating system" are commonly used instead to refer to the parent and child guest operating systems, respectively.
- Virtual Machine Snapshot
A virtual machine snapshot is a point-in-time image of a virtual machine that includes its disk, memory, and device state at the moment the snapshot was taken. It can be used at any time to return the virtual machine to that moment. Virtual machine snapshots can be taken regardless of which child guest operating system is being used and regardless of the state that operating system is in.
- Physical Processor
This is the physical chip installed in the computer. It is sometimes also referred to as a "package" or a "socket".
- Logical Processor
This is a single execution pipeline on the physical processor. In the past, saying a system had two processors described it exactly. Today, knowing that a system has two processors does not tell you how many cores each processor has or whether hyperthreading is present. A two-processor computer with hyperthreading actually has four execution pipelines, or four logical processors, and a two-processor computer with quad-core processors has eight logical processors.
- Virtual Processor
A virtual processor is a single logical processor exposed to a partition by the hypervisor. Virtual processors can be mapped to any of the available logical processors in the physical computer and are scheduled by the hypervisor, which allows a computer to run more virtual processors than it has logical processors.
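The memory remapping described for child partitions can be sketched as a simple second-level address translation. The page size, mapping table, and function names below are illustrative assumptions for this sketch; they do not reflect Hyper-V internals:

```python
PAGE_SIZE = 4096  # illustrative page granularity

# Hypothetical guest-physical -> host-physical page table maintained by
# the hypervisor / virtualization stack for one child partition.
guest_to_host_page = {0: 7, 1: 3, 2: 9}

def translate(guest_addr):
    """Re-map a child partition's memory access to a host physical address."""
    page, offset = divmod(guest_addr, PAGE_SIZE)
    if page not in guest_to_host_page:
        raise MemoryError("guest page not backed by host memory")
    return guest_to_host_page[page] * PAGE_SIZE + offset

# A guest access to address 4100 (page 1, offset 4) lands in host page 3.
print(translate(4100))  # 3 * 4096 + 4 = 12292
```

The point of the sketch is that the child partition never sees the host physical addresses; every access goes through a mapping the hypervisor controls.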
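The logical processor arithmetic above, and the hypervisor's ability to schedule more virtual processors than logical processors, can be illustrated as follows. The round-robin mapping is a deliberately simplified stand-in for the real hypervisor scheduler:

```python
def logical_processors(sockets, cores_per_socket, threads_per_core=1):
    """Count execution pipelines: sockets x cores x hardware threads."""
    return sockets * cores_per_socket * threads_per_core

# Two single-core processors with hyperthreading: 4 logical processors.
print(logical_processors(2, 1, 2))   # 4
# Two quad-core processors without hyperthreading: 8 logical processors.
print(logical_processors(2, 4))      # 8

def schedule(virtual_processors, logical_count):
    """Round-robin virtual processors onto logical processors (simplified)."""
    return {vp: vp % logical_count for vp in range(virtual_processors)}

# 8 virtual processors share 4 logical processors, two per pipeline.
print(schedule(8, 4))
```

When virtual processors outnumber logical processors, as in the last call, each logical processor is time-shared, which is why heavily loaded firewall workloads should not be overcommitted.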