Azure NetApp Files performance benchmarks for Linux

This article describes the performance benchmarks that Azure NetApp Files delivers for Linux.

Linux scale-out

This section describes performance benchmarks for Linux workload throughput and workload IOPS.

Linux workload throughput

The graph below represents a 64-kibibyte (KiB) sequential workload and a 1-TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s of pure sequential writes and ~4,500 MiB/s of pure sequential reads.

The graph steps through the read/write mix in 10% decrements, from pure read to pure write. It demonstrates what you can expect with varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
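As a point of reference, a workload of this shape can be approximated with fio. The following is a minimal sketch; the mount path, file sizes, job count, and queue depth are illustrative assumptions, not the exact parameters behind the published numbers.

```bash
# 64-KiB sequential I/O against an NFS-mounted Azure NetApp Files volume.
# 16 jobs x 64-GiB files ≈ 1-TiB working set (all values are assumptions).
# Sweep --rwmixread (100, 90, 80, ... 0) to reproduce each read/write ratio.
fio --name=seq64k --directory=/mnt/anf-volume \
    --rw=rw --rwmixread=80 --bs=64k \
    --ioengine=libaio --direct=1 --iodepth=16 \
    --numjobs=16 --size=64G \
    --time_based --runtime=300 --group_reporting
```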

(Graph: Linux workload throughput)

Linux workload IOPS

The following graph represents a 4-kibibyte (KiB) random workload and a 1-TiB working set. It shows that a single Azure NetApp Files volume can handle between ~130,000 pure random writes and ~460,000 pure random reads.

This graph steps through the read/write mix in 10% decrements, from pure read to pure write. It demonstrates what you can expect with varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
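The random test can be sketched the same way, switching fio to a random read/write pattern with a 4-KiB block size; again, the path and parameters are assumptions rather than the exact test configuration.

```bash
# 4-KiB random I/O against the same mount; a deeper queue helps drive IOPS.
# Sweep --rwmixread from 100 down to 0 for the ratio curve in the graph.
fio --name=rand4k --directory=/mnt/anf-volume \
    --rw=randrw --rwmixread=80 --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=64 \
    --numjobs=16 --size=64G \
    --time_based --runtime=300 --group_reporting
```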

(Graph: Linux workload IOPS)

Linux scale-up

The Linux 5.3 kernel enables single-client scale-out networking for NFS through the nconnect mount option. The graphs in this section show the validation test results for this client-side mount option with NFSv3. The feature is available on SUSE (starting with SLES12SP4) and Ubuntu (starting with the 19.10 release). It's similar in concept to both SMB multichannel and Oracle Direct NFS.

The graphs compare the advantages of nconnect against a volume mounted without nconnect. In the graphs, FIO generated the workload from a single D32s_v3 instance in the West US 2 Azure region.
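For illustration, the two mount configurations might look like the following on a 5.3+ kernel. The server IP, export path, and mount points are placeholders; nconnect=8 tells the client to open eight TCP connections to the NFS server instead of the default one.

```bash
# nconnect mount: multiple TCP connections from a single client (kernel 5.3+)
sudo mount -t nfs -o vers=3,nconnect=8,rsize=262144,wsize=262144,hard,tcp \
    10.0.0.4:/anf-volume /mnt/anf-volume

# Baseline mount for comparison: same options, no nconnect
sudo mount -t nfs -o vers=3,rsize=262144,wsize=262144,hard,tcp \
    10.0.0.4:/anf-volume /mnt/anf-baseline
```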

Linux read throughput

The following graphs show sequential reads of ~3,500 MiB/s with nconnect, roughly 2.3x the non-nconnect result.

(Graph: Linux read throughput)

Linux write throughput

The following graphs show sequential writes. They indicate that nconnect has no noticeable benefit for sequential writes; ~1,500 MiB/s is roughly both the volume's sequential write upper limit and the D32s_v3 instance's egress limit.

(Graph: Linux write throughput)

Linux read IOPS

The following graphs show random reads of ~200,000 read IOPS with nconnect, roughly 3x the non-nconnect result.

(Graph: Linux read IOPS)

Linux write IOPS

The following graphs show random writes of ~135,000 write IOPS with nconnect, roughly 3x the non-nconnect result.

(Graph: Linux write IOPS)

Next steps