What's New in HPC Pack 2016
This document lists the new features and changes that are available in Microsoft HPC Pack 2016.
Operating system and software requirements
HPC Pack 2016 has an updated set of requirements for operating system and other prerequisite software. Among other updates, HPC Pack 2016 provides support for Windows Server 2016 on the head node and several other node roles.
Note that the head node role of HPC Pack 2016 is not supported on Windows Server 2012.
High availability with Service Fabric
In HPC Pack 2016, we have migrated the head node services from the Failover Clustering service to Service Fabric. You can now deploy a highly available HPC Pack cluster much more easily in Azure or on-premises. See the Get started guide for Microsoft HPC Pack 2016 to create a highly available HPC Pack cluster on-premises. If you want to deploy a highly available HPC Pack cluster in Azure, see Deploy an HPC Pack 2016 cluster in Azure.
Azure Active Directory integration
With previous versions of HPC Pack set up in Azure virtual machines, you needed to set up a domain controller for your HPC cluster. This is because HPC Pack requires Active Directory authentication for cluster administrators and cluster users. In HPC Pack 2016, the administrator can alternatively configure Azure Active Directory for cluster authentication. For more details, see Manage an HPC Pack cluster in Azure using Azure Active Directory.
Enhanced GPU support
Since HPC Pack 2012 R2 Update 3, HPC Pack has supported GPUs on Windows compute nodes. HPC Pack 2016 extends this support to Linux compute nodes. With the Azure N-series VM sizes, you can deploy an HPC Pack cluster with GPU capabilities in Azure. For more details, see Get started with HPC Pack and Azure N-Series VMs.
Job scheduling and management
Hold job - Now in the job management UI (HPC Job Manager), you can hold an active job with a hold-until date and time. The queued tasks within the active job are held from dispatching, and if the job has any running tasks, the job state is marked as Draining instead of Running.
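The state rule described above can be sketched as a small decision function. This is a hypothetical illustration of the behavior, not HPC Pack code; the "Held" display value for a held job with no running tasks is an assumption:

```python
def job_display_state(is_held: bool, has_running_tasks: bool) -> str:
    # Hypothetical helper, not HPC Pack code: a held job that still has
    # running tasks shows as Draining instead of Running; queued tasks
    # in a held job simply stop dispatching.
    if is_held and has_running_tasks:
        return "Draining"
    if is_held:
        return "Held"  # assumption: no running tasks, nothing dispatches
    return "Running" if has_running_tasks else "Queued"

print(job_display_state(is_held=True, has_running_tasks=True))  # Draining
```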
Custom properties page - In the Job dialog, you can now view and edit a job's custom properties. If the value of a property is a link, the link is displayed on the page and can be clicked by the user. If you would like a file location to be clickable as well, use the format file:///<location>.
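A location in that file:///<location> form can be produced programmatically; here is a minimal Python sketch (the path below is hypothetical, for illustration only):

```python
from pathlib import PureWindowsPath

# Hypothetical output path, for illustration only: pathlib renders it
# in the file:///<location> form that the custom properties page
# treats as a clickable link.
link = PureWindowsPath(r"C:\HPC\output\result.txt").as_uri()
print(link)  # file:///C:/HPC/output/result.txt
```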
Substitution of mount point - When a task is executed on a Linux node, the user usually can't open the working directory from a Windows client. Now within the job management UI you can substitute the mount point by specifying the job custom properties linuxMountPoint and windowsMountPoint so that the user can access the folder. For example, you can create a job with the following settings:
- Custom Property:
linuxMountPoint = /gpfs/Production
- Custom Property:
windowsMountPoint = Z:\Production
- Task Working Directory:
/gpfs/Production/myjob
Then when you view the job in the GUI, the working directory value in the Job dialog > View Tasks page > Details tab is shown as z:\production\myjob. And if you previously mounted /gpfs to your local Z: drive, you will be able to view the job output files.
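The substitution described above amounts to rewriting a path prefix. The sketch below illustrates the mapping using the paths from the example; it is not HPC Pack's actual implementation:

```python
import ntpath

def substitute_mount_point(linux_path: str, linux_mount: str,
                           windows_mount: str) -> str:
    # Illustrative prefix rewrite, not HPC Pack code: a path under
    # linuxMountPoint is displayed under windowsMountPoint instead.
    if not linux_path.startswith(linux_mount):
        return linux_path
    tail = linux_path[len(linux_mount):].strip("/")
    return ntpath.join(windows_mount, *tail.split("/")) if tail else windows_mount

print(substitute_mount_point("/gpfs/Production/myjob",
                             "/gpfs/Production", r"Z:\Production"))
# Z:\Production\myjob
```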
Activity log - Job modification operations are now also recorded in the job's activity log.
Set subscribed information for node - The administrator can set a node's subscribed cores or sockets from the GUI. Select offline nodes and perform the Edit Properties action.
No copy job – If you specify the job custom property noGUICopy as true, the Copy action in the GUI is disabled.
Task execution filter - HPC Pack 2016 introduces a task execution filter for Linux compute nodes that calls administrator-customized scripts each time a task is executed on a Linux node. This helps enable scenarios such as executing tasks with an Active Directory account on Linux nodes and mounting a user's home folder for task execution. For more information, see Get started with HPC Pack task execution filter.
Release task issue fix – HPC Pack 2016 fixes the issue that a job release task may not be executed for exclusive jobs.
Job stuck issue – HPC Pack 2016 fixes an issue that a job may be stuck in the Queued state.
SOA runtime and APIs
4 MB message limit removed - In SOA requests, you can now send requests that are larger than 4 MB in size. A large request is split into smaller messages before being persisted to MSMQ, which has a 4 MB message size restriction.
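The splitting behavior can be pictured as simple byte chunking. This is an illustration of the idea only, not the broker's actual persistence code:

```python
from typing import List

MSMQ_LIMIT = 4 * 1024 * 1024  # MSMQ's 4 MB per-message cap

def split_message(payload: bytes, limit: int = MSMQ_LIMIT) -> List[bytes]:
    # Illustration only: one oversized SOA request is persisted as
    # several chunks that each fit under the MSMQ message size limit.
    return [payload[i:i + limit] for i in range(0, len(payload), limit)] or [b""]

big = b"x" * (10 * 1024 * 1024)  # a 10 MB request
chunks = split_message(big)
print(len(chunks))  # 3 chunks: 4 MB + 4 MB + 2 MB
```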
HoldUntil for SOA sessions - For a SOA session, users can now pause a running session by modifying a session job's HoldUntil property to a future time.
SOA session survival during head node failover - Running SOA sessions can now survive a head node failover.
SOA sessions can run on non-domain-joined compute nodes - For non-domain-joined compute nodes, the broker back-end binding configuration in the service registration file can be updated with None or Certificate security.
New nethttp transport scheme - The nethttp scheme is based on WebSocket, which can greatly improve message throughput compared with basic HTTP connections.
Configurable broker dispatcher capacity - Users can specify the broker dispatcher capacity instead of the calculated cores. This achieves more accurate grow and shrink behavior if the resource type is node or socket.
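One way to picture the more accurate grow behavior is a ceiling computation over the user-specified capacity. The formula and numbers below are hypothetical illustrations, not HPC Pack's actual scheduler logic:

```python
import math

def grow_target(outstanding_requests: int, dispatcher_capacity: int) -> int:
    # Hypothetical sizing sketch: with an explicit per-dispatcher
    # capacity, the broker can request exactly the number of
    # dispatchers it needs instead of estimating from core counts.
    return math.ceil(outstanding_requests / dispatcher_capacity)

print(grow_target(250, 16))  # 16
```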
Multiple SOA sessions in a shared session pool - To specify the pool size for a SOA service, add the optional configuration <service maxSessionPoolSize="20"> in the service registration file. When creating a shared SOA session that uses the session pool, set both ShareSession and UseSessionPool to true. After using the session, close it without purging to leave it in the pool.
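A minimal service registration sketch with the attribute in place might look as follows; only maxSessionPoolSize is taken from the text above, while the assembly path and surrounding element layout are illustrative assumptions:

```xml
<!-- Sketch of a service registration fragment; the assembly path
     is hypothetical and the surrounding layout is illustrative. -->
<microsoft.Hpc.Session.ServiceRegistration>
  <service assembly="C:\Services\EchoService.dll"
           maxSessionPoolSize="20" />
</microsoft.Hpc.Session.ServiceRegistration>
```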
Updated EchoClient.exe - Updates for random message size and time, flush per number of requests support, message operation (send/flush/EOM/get) timeout parameter, and new nethttp scheme support.
Extra optional parameters in the ExcelClient.OpenSession method for Excel VBA
Added GPU type support for SOA session API
Miscellaneous stability and performance fixes in SOA services
Cluster deployment and management
Autogrow/shrink service supports Linux nodes - The autogrow/shrink service now also manages Linux compute nodes when the HPC Pack cluster is deployed in Azure virtual machines.
New property for autogrow/shrink service - The ExcludeNodeGroup property enables you to specify the node group or node groups to exclude from automatic node starts and stops.
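The effect of ExcludeNodeGroup can be pictured as filtering nodes out of the autoscale candidate set. The node names, groups, and helper function below are hypothetical illustrations, not HPC Pack code:

```python
from typing import Dict, List, Set

def autoscale_candidates(nodes: Dict[str, Set[str]],
                         exclude_groups: Set[str]) -> List[str]:
    # Sketch of the ExcludeNodeGroup behavior: nodes in any excluded
    # group are left out of automatic starts and stops.
    return sorted(name for name, groups in nodes.items()
                  if not groups & exclude_groups)

# Hypothetical cluster: CN02 belongs to an "AlwaysOn" group that
# should never be stopped automatically.
nodes = {
    "CN01": {"ComputeNodes"},
    "CN02": {"ComputeNodes", "AlwaysOn"},
    "LN01": {"LinuxNodes"},
}
print(autoscale_candidates(nodes, {"AlwaysOn"}))  # ['CN01', 'LN01']
```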
Built-in REST API Service – Now the REST API Service is installed on every head node instance by default.
Non-domain-joined Windows compute nodes – The cluster administrator can set up a Windows compute node which is not domain-joined. A local account will be created and used when a job is executed on this type of node.