Testing File Server Performance

Measuring file server performance can be a little tricky.  To simulate real-world usage, testing a file share from a client on the network is the best way to get a good look at the expected speeds.  However, that test will traverse multiple layers, with the overall speed limited to the slowest link.  At the lowest level you have the speed of the storage on the file server.  Next, there’s the speed of the network, including on the file server side, between the server and the clients, and on the client side.  You also have to consider how fast the storage on the client end is if you will be copying files to or from the client’s disks.

In this post, we are going to look at each of the components that affect the speed clients can read/write to a file server.  After gathering baseline stats, we’ll look at 2 different ways to test the performance of a file server: using the Windows Explorer file copy UI and using the diskspd command line tool.  We’ll find that the speed shown in the Windows Explorer file copy interface isn’t a reliable measurement of the disk throughput on the backend.

The Setup

For testing, I have two VMs in Azure:

  • FS1 – Our file server that we want to test the performance of
    • Windows Server 2016
    • Size: D2s_v3
    • 2 vCPU / 8 GB
    • Maximum disk performance: 4,000 IOPS and 32 MB/s
    • Approx. outbound bandwidth: 1 Gb/s
  • APP1 – Our client we’re testing from
    • Windows Server 2012 R2
    • Size: D8s_v3
    • 8 vCPU / 32 GB
    • Maximum disk performance: 16,000 IOPS and 128 MB/s
    • Approx. outbound bandwidth: 4 Gb/s

The disk performance noted above is for the temporary drive attached to each VM.  The temporary drive in Azure is an SSD that's directly connected to the host node the VM is running on.  You get decent throughput on this disk without having to spin up (and pay for) a separate data disk.  We'll be using the temporary drive (D:) for our tests.
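
If you want to confirm which volume is the temporary drive, a quick PowerShell check works (a minimal example; in Azure the temporary drive is normally labeled "Temporary Storage"):

Get-Volume -DriveLetter D | Select-Object DriveLetter, FileSystemLabel, Size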

You may notice that the client machine is a considerably higher spec VM than the server.  This is intentional because we want to ensure that the client machine isn’t a bottleneck while we’re testing.  In a real world deployment, it’s likely the file server would be at least as fast as the clients.

To keep things a little simpler, all of our tests will be writing data to the file server.  In reality, you would want to test a read/write workload that matches the expected usage.
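
For example, with diskspd (introduced in the next section) you can approximate a mixed workload by lowering the -w parameter, which sets the percentage of operations that are writes.  Something like the following would give a roughly 70% read / 30% write mix (an illustrative variation, not a command run as part of this post):

diskspd.exe -c10G -d30 -b16K -h -o32 -t4 -r -w30 d:\tempfile.dat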

Baseline Stats

Before we test the file share performance, we need to establish baseline read/write performance of both servers and the networking between them.

First, we’ll use diskspd to test the disk performance on both servers.  Diskspd is a versatile Microsoft tool for testing and measuring disk throughput.  We’ll run it 2 different ways on each server to show the max IOPS and throughput on the disks in question:

These results are just about on par with the maximums listed in the VM information above.  This shows we're getting the advertised disk speed on both instances.  For those curious, the following commands were used to get these stats:

Max IOPS: diskspd.exe -c10G -d30 -b4K -h -o32 -t4 -r -w100 d:\tempfile.dat

Max MB/s: diskspd.exe -c10G -d30 -b16K -h -o32 -t4 -r -w100 d:\tempfile.dat

The only difference between the two is the “-b” parameter, which sets the block size.  A smaller block size allows more IO operations per second but lower throughput in MB/s, since each IO request is smaller.  This is why we run the test separately for IOPS vs. MB/s.
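
As a rough sanity check, you can multiply IOPS by block size to see which of FS1's limits (4,000 IOPS and 32 MB/s) applies: with 4K blocks, 4,000 IOPS × 4 KB ≈ 16 MB/s, so the IOPS cap is the constraint; with 16K blocks, the 32 MB/s cap works out to roughly 2,000 IOPS, so throughput is the constraint.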

With the disk speeds established and matching the published specs, we’ll also test the network speed between the two VMs.  For this, we’ll use iperf3 to confirm we get close to the advertised 4 Gb/s when testing the bandwidth from APP1 to FS1.  We won’t test the reverse direction since we’re only concerned with how quickly we can copy data to FS1 in this exercise.

The iperf3 command we used is:

iperf3 -c FS1 -t 30 -P 8 -i 0

Setting the -P parameter to 8 uses 8 parallel streams between the client and server, which helps maximize the available bandwidth.  We can see above that we got an average of 3.60 Gb/s.  This is a touch less than the advertised maximum of 4 Gb/s, but it’s close enough for our testing.
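
For completeness, the other end of this test is just iperf3 running in server mode on FS1 (assuming the default port):

iperf3 -s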

Test 1 – Windows File Copy

For our first test, we will copy a large file (a Server 2016 ISO) from APP1 to a share on the D: drive of FS1.  Since the first part of the file copy will be written to the cache on FS1, we will look at the average speed after the copy speed drops off.  Looking at the file copy UI, the speed is bouncing between 10 MB/s and 26 MB/s:

Using the graph and listed speed, I would estimate that the average is roughly 20 MB/s.  However, since the file copy UI doesn’t give us a precise result, let’s look at the “Logical Disk\Disk Write Bytes/sec” counter on the file server.  After we get past the cache and onto the disks, we see this in perfmon:


At the disk level on the file server, we’re averaging 32 MB/s – just like we would expect based on the baseline stats.  This is considerably different from the file copy UI, which is showing notably slower speeds.  Most likely, some of the data is being cached on the file server, which is skewing the numbers between the servers.  Either way, it’s clear that the speed in the file copy UI isn’t a good indicator of disk throughput, even if it does accurately show the overall speed of the SMB copy.
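
If you’d rather capture that counter from PowerShell instead of the perfmon UI, a one-liner along these lines works (a minimal sketch; the D: instance name is assumed to match the drive hosting the share):

Get-Counter -Counter '\LogicalDisk(D:)\Disk Write Bytes/sec' -SampleInterval 1 -MaxSamples 60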

Test 2 – Diskspd

Just like for our baseline stats, we’ll use diskspd to test the raw disk performance.  This time however, we will do it over the network from our client server, APP1.  The command is similar to before, except we are targeting the file share instead of a local path:

diskspd.exe -c10G -d30 -b16K -h -o32 -t4 -r -w100 \\fs1\dshare\tempfile.dat
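
For this to work, the dshare share has to exist on FS1 first.  Something like the following would create it (the local path and permissions here are assumptions for illustration):

New-SmbShare -Name dshare -Path D:\dshare -FullAccess Everyone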

The results show that over the network, we’re able to achieve the same 32 MB/s throughput:

Lessons Learned

Overall, we learned the following things from this testing:

  • The speed listed in the Windows Explorer file copy interface isn’t always indicative of the underlying storage speed.  There is a correlation between the displayed speed and the underlying storage, but there are other factors at play that influence the speed shown in the UI.
  • Diskspd can be used to measure disk performance both locally and over the network.
  • Diskspd can be used to maximize IOPS and throughput, but these may require separate commands to tune the parameters.
  • The listed maximums for the Azure VM sizes very closely match the actual maximums.

For more reading on this topic, check out this blog post by Jose Barreto from Microsoft.  He goes over some of the pitfalls and details of using Windows file copies for performance testing.