HC-series virtual machine sizes
Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets
Several performance tests have been run on HC-series sizes. The following are some of the results of this performance testing.
| Benchmark | Result |
| --- | --- |
| STREAM Triad | 190 GB/s (Intel MLC AVX-512) |
| High-Performance Linpack (HPL) | 3520 GigaFLOPS (Rpeak), 2970 GigaFLOPS (Rmax) |
| RDMA latency & bandwidth | 1.05 microseconds, 96.8 Gb/s |
| FIO on local NVMe SSD | 1.3 GB/s reads, 900 MB/s writes |
| IOR on 4 Azure Premium SSDs (P30 Managed Disks, RAID 0) | 780 MB/s reads, 780 MB/s writes |
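As a rough illustration of how a sequential-read test of the kind in the FIO row might be set up, the following fio job file sketches one possible configuration. The device path, block size, queue depth, and runtime are assumptions for illustration, not the parameters of the published run.

```ini
; Hypothetical fio job file for a sequential read test on the local NVMe SSD.
; Adjust filename to the NVMe device on your VM before running.
[nvme-seq-read]
filename=/dev/nvme0n1
rw=read
bs=1M
iodepth=32
ioengine=libaio
direct=1
time_based=1
runtime=60
```

Run it with `fio nvme-seq-read.fio`; note that writing to a raw device destroys its contents, so a write test should only target a scratch disk.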
Run the MPI latency test from the OSU microbenchmark suite. Sample scripts are on GitHub.

```bash
./bin/mpirun_rsh -np 2 -hostfile ~/hostfile MV2_CPU_MAPPING=[INSERT CORE #] ./osu_latency
```
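The command above expects a hostfile listing one node per line. As a minimal sketch, the following creates such a file and assembles the launch command for a chosen core; the IP addresses and core number are placeholder assumptions, not values from the test.

```shell
#!/bin/sh
# Sketch: build the hostfile referenced by mpirun_rsh and the launch
# command for a pinned core. IPs below are hypothetical examples.
cat > ~/hostfile <<'EOF'
10.0.0.4
10.0.0.5
EOF

CORE=0   # assumed core; pick one on the NUMA node closest to the NIC
CMD="./bin/mpirun_rsh -np 2 -hostfile ~/hostfile MV2_CPU_MAPPING=${CORE} ./osu_latency"
echo "$CMD"   # replace echo with eval "$CMD" to actually launch the test
```

Pinning the rank with `MV2_CPU_MAPPING` keeps the measurement stable; latency varies with which core (and NUMA node) the rank lands on.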
Run the MPI bandwidth test from the OSU microbenchmark suite. Sample scripts are on GitHub.

```bash
./mvapich2-2.3.install/bin/mpirun_rsh -np 2 -hostfile ~/hostfile MV2_CPU_MAPPING=[INSERT CORE #] ./mvapich2-2.3/osu_benchmarks/mpi/pt2pt/osu_bw
```
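Because results depend on the core chosen for `MV2_CPU_MAPPING`, one way to find a good placement is to sweep a few cores and compare. The sketch below just prints the command for each core in an assumed range of 0-3; the paths come from the command above, but the core range is an assumption for illustration.

```shell
#!/bin/sh
# Sketch: sweep the pinned core and emit the osu_bw command for each,
# to help locate the NUMA placement that maximizes bandwidth.
for CORE in 0 1 2 3; do
  CMD="./mvapich2-2.3.install/bin/mpirun_rsh -np 2 -hostfile ~/hostfile MV2_CPU_MAPPING=${CORE} ./mvapich2-2.3/osu_benchmarks/mpi/pt2pt/osu_bw"
  echo "$CMD"   # replace echo with eval "$CMD" to run each sweep step
done
```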
The Mellanox Perftest package has many InfiniBand tests, such as latency (ib_send_lat) and bandwidth (ib_send_bw). An example command is below.

```bash
numactl --physcpubind=[INSERT CORE #] ib_send_lat -a
```
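The perftest tools run as a server/client pair: one node starts the test with no peer argument, and the other connects to it. The sketch below only prints the two invocations side by side; the server IP and core number are hypothetical placeholders.

```shell
#!/bin/sh
# Sketch: the paired invocations for ib_send_lat across two nodes.
# SERVER and CORE are assumed values, not from the published test.
SERVER=10.0.0.4   # IP of the node running the server side
CORE=0            # pick a core on the NUMA node closest to the HCA
echo "on server:  numactl --physcpubind=${CORE} ib_send_lat -a"
echo "on client:  numactl --physcpubind=${CORE} ib_send_lat -a ${SERVER}"
```

Binding with `numactl --physcpubind` matters here for the same reason as `MV2_CPU_MAPPING` above: latency is lowest when the process runs near the InfiniBand adapter.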