What's New in HPC Pack 2016 Update 1

This document lists the new features and changes available in Microsoft HPC Pack 2016 Update 1, compared with HPC Pack 2016 RTM. You can download and install HPC Pack 2016 Update 1 on your on-premises computers, or use the deployment templates to create an HPC Pack 2016 Update 1 cluster on Azure.

Documentation for new scenarios in HPC Pack 2016 Update 1 is forthcoming.

Removed dependency on Service Fabric for single head node installation

Service Fabric is no longer required to install a cluster with a single head node. However, it is still required to set up a cluster with three head nodes.

Azure Active Directory integration for SOA jobs

In HPC Pack 2016 RTM, we added Azure Active Directory support for HPC batch jobs. Update 1 extends that support to HPC SOA jobs.

Burst to Azure Batch improvements

In this version, we improve bursting to Azure Batch pools, including support for low-priority VMs and Linux VM pools.

Use Docker in HPC Pack

HPC Pack is now integrated with Docker. Job users can submit a job that requests a Docker image, and the HPC Job Scheduler starts the task in a Docker container. NVIDIA Docker GPU jobs and MPI jobs that span Docker containers are both supported.

Manage Azure Resource Manager VMs in HPC Pack

One of the advantages of HPC Pack is that you can manage Azure compute resources for your cluster through node templates, in the same way that you manage on-premises resources. In addition to the Azure node template (for PaaS nodes) and the Azure Batch pool template, we introduce a new type of template: the Azure Resource Manager (RM) VM template. It enables you to add, deploy, remove, and manage RM VMs from one central place.

Support for Excel 2016

If you’re using Excel Workbook offloading in HPC Pack 2016 Update 1, Excel 2016 is supported by default. The way you use Excel Workbook offloading hasn’t changed from earlier versions of HPC Pack. If you use Office 365, you need to activate Excel manually on all compute nodes.

Improved autogrow/shrink operation log

Previously you had to review the management logs to see what was happening with the autogrow/shrink operations for compute resources, which was not very convenient. Now you can read the logs within the HPC Cluster Manager GUI (Resource Management > Operations > AzureOperations). If autogrow/shrink is enabled, you see one active “Autogrow/shrink report” operation every 30 minutes. Unlike other operations, these operation logs are never archived; they are purged after 48 hours.

Note that this feature already exists in HPC Pack 2012 R2 Update 3 with QFE4032368.

Improved Linux mutual trust configuration for cross-node MPI jobs

Previously, when a cluster user submitted a cross-node MPI job, they had to provide a key pair XML file generated with hpccred.exe setcreds /extendeddata:<xml>. This is no longer required, because HPC Pack 2016 Update 1 generates a key pair for the user automatically.

Peek output for a running task

Before HPC Pack 2016 Update 1, you could see only the last 5K of a task's output, and only if the task did not redirect its output. If the task redirected its output to a file, you had no way to see the output of a running task. The situation was worse when your job and task ran on Azure nodes, because your client could not access the Azure nodes directly.

Now you can view the latest 4K of standard output and standard error: open the View Job dialog box in HPC Job Manager, select a running task, and click Peek Output.

You can also use the HPC Pack command task view <jobId>.<TaskId> /peekoutput to get the latest 4K of output and standard error from the running task.

Considerations:

  • This feature does not work if your task is running on an Azure Batch pool node.
  • Peek output may fail if your task redirects the output to a share that the compute node account cannot access.

Backward compatibility

In HPC Pack 2016 Update 1 we add compatibility with previous versions of HPC Pack. The following scenarios are supported:

  1. With the HPC Pack 2016 Update 1 SDK, you can connect to previous versions of an HPC Pack cluster (HPC Pack 2012 and HPC Pack 2012 R2) and submit and manage your batch jobs. Note that the HPC Pack 2016 Update 1 SDK does not work with HPC Pack 2016 RTM clusters.
  2. You can connect to an HPC Pack 2016 Update 1 cluster from previous versions of the HPC Pack SDK (HPC Pack 2012 and HPC Pack 2012 R2) to submit and manage batch jobs. Note that this works only if your HPC Pack 2016 Update 1 cluster nodes are domain joined.

The latest HPC Pack SDK is available on NuGet.

The HPC Pack 2016 Update 1 SDK adds a new method to IScheduler, shown below, that lets you choose which endpoint to connect to: WCF or .NET remoting. Previously, the Connect method first tried to connect to a WCF endpoint (versions later than HPC Pack 2016 RTM) and, if that failed, tried a .NET remoting endpoint (versions before HPC Pack 2016).

public void Connect(string cluster, ConnectMethod method)

SqlConnectionStringProvider plugin

HPC Pack 2016 Update 1 supports a plugin that provides a customized SQL connection string. This is mainly for managing the SQL connection strings in a security system separate from HPC Pack 2016 Update 1. For example, use the plugin if the connection strings change frequently, or if they contain secrets that should not be saved in the Windows registry or the Service Fabric cluster property store.
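The plugin contract itself is part of the HPC Pack SDK and is not reproduced here. The Python sketch below only illustrates the general pattern of resolving a connection string from an external source at call time instead of persisting it locally; the function name and the environment variable are hypothetical stand-ins for a real secret store.

```python
import os

def get_sql_connection_string(name="HPC_SQL_CONNECTION_STRING"):
    """Hypothetical provider: fetch the connection string from an
    external source on demand, so it is never stored in the Windows
    registry or the cluster property store."""
    value = os.environ.get(name)  # stand-in for a secret-store lookup
    if value is None:
        raise LookupError(f"connection string {name!r} is not provisioned")
    return value
```

Because the string is resolved on each call, rotating it in the external store takes effect without touching any locally persisted configuration.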

New HPC Pack Job Scheduler REST API

While we keep the original HPC Pack REST API for backward compatibility, we introduce a new set of REST APIs for Azure AD integration and add JSON format support.
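As a sketch of what JSON support means for clients, the following Python snippet builds (but does not send) a JSON job-submission request carrying an Azure AD bearer token. The route /hpc/jobs, the job property names, and the token placeholder are assumptions for illustration only; consult the REST API reference for the real routes and payloads.

```python
import json
import urllib.request

head_node = "https://myheadnode.contoso.com"   # hypothetical head node
job = {"Name": "MyJob", "Priority": "Normal"}  # illustrative job properties

req = urllib.request.Request(
    url=f"{head_node}/hpc/jobs",               # hypothetical route
    data=json.dumps(job).encode("utf-8"),      # JSON body instead of XML
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <AAD-access-token>",  # Azure AD integration
    },
    method="POST",
)
# urllib.request.urlopen(req)  # sending would require a reachable cluster
```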

SOA performance improvement

The performance of service-oriented architecture (SOA) jobs has been improved in this release.