What's New in HPC Pack 2016 Update 2

This document lists the new features and changes that are available in Microsoft HPC Pack 2016 Update 2, compared with HPC Pack 2016 Update 1. You can download and install HPC Pack 2016 Update 2 on your on-premises computers, or use the deployment templates to create an HPC Pack 2016 Update 2 cluster on Azure.

Mesos integration

HPC Pack can now automatically grow and shrink compute resources from a Mesos cluster with the help of the open-source HPC Pack Mesos framework. After you enable this feature, HPC Pack will

  1. Borrow compute node resources from the Mesos cluster, if
    • HPC Pack has queued tasks that need more resources.
    • The Mesos cluster has resources available for HPC Pack.
  2. Return compute nodes to the Mesos cluster, if
    • A node borrowed from the Mesos framework has reached its idle timeout.
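The borrow/return decision above can be summarized in a short sketch (illustrative Python only, not the actual HPC Pack Mesos framework code; the function names and the `IDLE_TIMEOUT` value are assumptions for the example):

```python
from datetime import datetime, timedelta

# Hypothetical idle timeout after which a borrowed node is returned.
IDLE_TIMEOUT = timedelta(minutes=15)

def should_borrow(queued_task_count: int, mesos_free_cores: int) -> bool:
    """Borrow Mesos resources only if HPC Pack has queued tasks
    waiting for resources AND the Mesos cluster has spare capacity."""
    return queued_task_count > 0 and mesos_free_cores > 0

def should_return(idle_since: datetime, now: datetime) -> bool:
    """Return a borrowed node once it has been idle past the timeout."""
    return now - idle_since >= IDLE_TIMEOUT
```

Both conditions for borrowing must hold at once; returning depends only on the per-node idle timer.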

SOA common data for non-domain joined compute nodes

This enables SOA services running on non-domain-joined Windows compute nodes to read SOA common data directly from Azure Blob storage, without accessing the HPC runtime share that is usually located on the head node. This makes it possible to run your SOA workloads in a hybrid HPC Pack cluster with compute resources provisioned in Azure.

Burst to Azure IaaS VM improvements

In HPC Pack 2016 Update 1 we introduced a new node template type, the "Azure IaaS VM node template". Update 2 brings the following improvements:

  1. Previously, to create a node template for Azure IaaS VMs, the cluster admin had to perform multiple steps to configure an Azure service principal and an Azure Key Vault secret, and to collect information from the Azure portal. All of these steps are now consolidated into a step-by-step wizard.

  2. You can now enable Azure Accelerated Networking when you create Azure IaaS compute nodes.

  3. Azure node information (such as node size and image name) is now collected automatically through the metadata service on the node. After the node is provisioned, you can view this information in the Cluster Manager GUI.

View job/task history on node

Four new parameters were added to the node view command: "/jobhistory", "/taskhistory", "/lastrows", and "/when". Use "/jobhistory" and/or "/taskhistory" to get the job and/or task allocation history of the node. Use "/lastrows" to specify how many rows to query from the database for the allocation history; the default is 100 rows. Use "/when" to filter the allocation history and check which jobs or tasks were running on the node at the specified time. For example, the following command shows the latest job history on node IaaSCN001:

node view IaaSCN001 /jobhistory
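Conceptually, the "/when" filter selects the history rows whose allocation interval covers the given time. A rough Python sketch of that filtering (illustrative only; the row layout here is an assumption, not the actual scheduler database schema):

```python
from datetime import datetime

# Hypothetical allocation-history rows: (job_id, start_time, end_time).
# end_time is None for an allocation that is still active.
def running_at(rows, when: datetime):
    """Return the job IDs that were running on the node at `when`,
    i.e. rows whose [start, end) interval contains the given time."""
    return [
        job_id
        for job_id, start, end in rows
        if start <= when and (end is None or when < end)
    ]
```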

Fast Balanced Scheduling Mode

Fast Balanced is a new scheduling mode in addition to the Queued and Balanced modes. In this mode, cores are allocated to jobs according to their extended priority. Unlike the Balanced mode, which calculates the balance quota among running jobs only, the Fast Balanced mode calculates the balance quota among queued and running jobs together, and preemption happens gracefully at the node group level, so the scheduler reaches the final balanced state more efficiently. The Fast Balanced mode has some constraints on job settings; incompatible jobs fail with a validation error.
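The difference between the two balanced modes can be illustrated with a toy quota calculation (a simplified sketch only; the real scheduler also weighs extended priority and node groups, which are omitted here):

```python
def balance_quota(total_cores: int, running_jobs: int, queued_jobs: int,
                  fast_balanced: bool) -> int:
    """Cores each job would get under an even split.
    Balanced mode splits among running jobs only; Fast Balanced
    splits among queued AND running jobs together, so the target
    quota already accounts for jobs that have not started yet."""
    jobs = running_jobs + queued_jobs if fast_balanced else running_jobs
    return total_cores // max(jobs, 1)
```

For example, with 64 cores, 2 running jobs, and 2 queued jobs, Balanced mode targets 32 cores per running job and rebalances later, while Fast Balanced targets 16 cores per job immediately.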

To enable Fast Balanced mode, run the following PowerShell cmdlet:

Set-HpcClusterProperty -SchedulingMode FastBalanced

Task to reboot a compute node for cluster admins

A cluster admin can now create and submit a job with a task that reboots a compute node when the task finishes. The admin needs to specify the task environment variable CCP_RESTART=True. When the task completes on the compute node, the node reboots itself while the task remains in the "Running" state in the scheduler. When the node service comes back up after the reboot, the task is then reported as completed.

A task of this type should run exclusively on the compute node, so that no other tasks are impacted when it reboots the node.
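The task lifecycle described above can be summarized as a small state table (an illustrative sketch of the observable states from the two paragraphs above, not of scheduler internals; the event names are assumptions):

```python
# Observable task state in the scheduler as a CCP_RESTART=True
# task progresses, keyed by a hypothetical event name.
TRANSITIONS = {
    # The task finishes on the node and the node starts rebooting,
    # but the scheduler still shows the task as Running.
    "task_finishes_on_node": "Running",
    # After the reboot, the node service restarts and the task
    # is finally reported as completed.
    "node_service_restarted": "Finished",
}

def reported_state(event: str) -> str:
    return TRANSITIONS[event]
```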

Lizard updated

Lizard is a tool that runs and tunes the LINPACK benchmark on an HPC Pack cluster. An updated Lizard tool can now be downloaded together with HPC Pack 2016 Update 2.

Other improvements

Linux nodes are now included in the HPC Pack built-in reports for resource utilization and availability.

The following improvements are also available for HPC Pack 2016 Update 1 through QFE:

  1. Added a new value, "KEEP", for the job environment variable HPC_CREATECONSOLE. When this value is specified, HPC Pack creates a new logon console session if none exists (or attaches to the existing one) and keeps the console session after the job completes on the compute nodes;

  2. We now generate a host file or machine file for Intel MPI, Open MPI, MPICH, and other MPI applications on Linux nodes. The file, which contains the node or core allocation information for the MPI application, is generated when the rank 0 task starts. Use the job or task environment variable $CCP_MPI_HOSTFILE in the task command to get the file name, and $CCP_MPI_HOSTFILE_FORMAT to specify the format of the host file or machine file. Here is an example of how you can use this in an MPI PingPong run:

    source /opt/intel/impi/`ls /opt/intel/impi`/bin64/mpivars.sh && mpirun -f $CCP_MPI_HOSTFILE IMB-MPI1 pingpong
  3. By default, the scheduler uses the job's RunAs user credential to perform an "Interactive" logon on the compute node, and the "Interactive" logon permission may be banned by your domain policy. We now introduce a new job environment variable, "HPC_JOBLOGONTYPE", so that you can specify a different logon type to mitigate the issue. The value can be set to 2, 3, 4, 5, 7, 8, or 9, as shown in the LogonType enumeration below:

    public enum LogonType
    {
        Interactive = 2,
        Network = 3,
        Batch = 4,
        Service = 5,
        Unlock = 7,
        NetworkClearText = 8,
        NewCredentials = 9,
    }
  4. DB connection string plugin improvement: in addition to the scheduler service, the DB connection strings in the monitoring, reporting, diagnostics, and SDM services are now refreshed when a connection fails, so you can use a customized assembly as a plugin to refresh the DB connection strings in all of these services.
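The refresh-on-failure behavior in item 4 can be sketched roughly as follows (an illustrative Python sketch; `connect` and `refresh_connection_string` stand in for the service's connection logic and the customized plugin assembly, and are assumptions for the example):

```python
def connect_with_refresh(connect, refresh_connection_string, conn_str: str):
    """Try the current connection string; on failure, ask the
    plugin for a fresh connection string and retry once."""
    try:
        return connect(conn_str)
    except ConnectionError:
        fresh = refresh_connection_string()
        return connect(fresh)
```

The key point is that the refresh hook is now invoked by several services (monitoring, reporting, diagnostics, SDM), not just the scheduler.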