Before we get into this post, let me be up front and say that this is a completely unsupported configuration as far as Microsoft is concerned. Much like UnRAID or ZFS, Storage Spaces wants direct access to the disks to work properly. You can force Storage Spaces to work with RAID volumes, but if you have problems, Microsoft support will not assist. Your biggest issue will be handling failures, as Storage Spaces will not be able to see the individual disks' SMART data and accurately predict drive failures.
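For reference, when the PERC presents RAID volumes to Windows, Storage Spaces usually reports their media type as Unspecified, so tiering won't work until you tell it which virtual disks are SSD-backed. A rough PowerShell sketch of how that looks (the friendly names here are made-up examples, not the actual commands from this build):

    # List the RAID virtual disks the controller presents; MediaType typically shows "Unspecified".
    Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, MediaType, Size, CanPool

    # Once the disks have been added to a pool, override the media type so tiering works.
    # Friendly names are placeholders for whatever the PERC virtual disks show up as.
    Set-PhysicalDisk -FriendlyName "SSD-VD-01" -MediaType SSD
    Set-PhysicalDisk -FriendlyName "HDD-VD-01" -MediaType HDD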
Now, with that out of the way, here is the setup for this test: a Dell T710 as the server, a PERC H700 with 512MB BBWC, 2x Samsung 850 EVO plus 2x 500GB Seagate Constellation ES SATA drives in a tiered mirror space, and 4x 500GB Seagate Constellation ES plus 4x 1TB WD Black SATA disks in a parity space with a 30GB SSD write cache. I wanted to compare how drives configured on a RAID controller with its own built-in write cache would stack up against my configuration.
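For anyone wanting to replicate something similar, here is a rough PowerShell sketch of how the two spaces could be laid out. The pool, tier, and space names plus the tier sizes are placeholders for illustration, not the exact commands run on this box:

    # Pool all of the available disks (assumes a single pool for both spaces).
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "T710Pool" `
        -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

    # Tiered mirror space: SSD tier plus HDD tier, mirrored (tier sizes are examples).
    $ssdTier = New-StorageTier -StoragePoolFriendlyName "T710Pool" -FriendlyName "SSDTier" -MediaType SSD
    $hddTier = New-StorageTier -StoragePoolFriendlyName "T710Pool" -FriendlyName "HDDTier" -MediaType HDD
    New-VirtualDisk -StoragePoolFriendlyName "T710Pool" -FriendlyName "TieredMirror" `
        -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 400GB `
        -ResiliencySettingName Mirror

    # Parity space across the remaining capacity with a 30GB write-back cache.
    New-VirtualDisk -StoragePoolFriendlyName "T710Pool" -FriendlyName "ParitySpace" `
        -ResiliencySettingName Parity -UseMaximumSize -WriteCacheSize 30GB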
We set up the test the same as the earlier test on my homelab server: a 100GB test file in IOMeter, run with 75% read workloads at 512 B and 4 KiB block sizes. I only tested against the host, not a VM, and I tested against both the tiered mirror and the parity array. Here are the results.
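If you want to approximate the same workload without IOMeter, DiskSpd can get close. This is only a rough equivalent (drive letter, thread count, queue depth, and duration are arbitrary), not the actual IOMeter access spec used for these numbers:

    # Roughly 75% read, sequential, 4 KiB blocks against a 100GB test file (DiskSpd, not IOMeter).
    diskspd.exe -c100G -b4K -w25 -t4 -o8 -d60 -Sh -L T:\iotest.dat

    # Same run at a 512 B block size.
    diskspd.exe -c100G -b512 -w25 -t4 -o8 -d60 -Sh -L T:\iotest.dat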
SERVER TYPE: Dell T710
CPU TYPE / NUMBER: Xeon X5560 x2
HOST TYPE: Server 2016
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Storage Spaces Tiered Mirror
Test name | Latency (ms) | Avg IOPS | Avg MBps | CPU load
512 B; 75% Read; 0% random | 0.12 | 8601 | 4 | 0%
4 KiB; 75% Read; 0% random | 0.15 | 6742 | 26 | 0%
SERVER TYPE: Dell T710
CPU TYPE / NUMBER: Xeon X5560 x2
HOST TYPE: Server 2016
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Storage Spaces Parity
Test name | Latency (ms) | Avg IOPS | Avg MBps | CPU load
512 B; 75% Read; 0% random | 0.11 | 8638 | 4 | 0%
4 KiB; 75% Read; 0% random | 0.16 | 6169 | 24 | 0%
When we compare these figures to my setup, we see a couple of things. First, the CPU load on the host here is considerably lower; I was seeing 10-15% CPU utilization during my tests because my CPU has less compute power than a single one of these Xeons, let alone two. Next, we notice that on average my latency was lower, because I am using an NVMe cache instead of just a SATA SSD cache for my tiers. Lastly, average IOPS and throughput are again significantly higher on my system, where the NVMe cache really helps things out.
Conclusions:
A RAID controller with its own dedicated cache on the card is not only an unsupported configuration, it also really isn't a substitute for an NVMe write cache. Additionally, parity spaces greatly benefit from more powerful CPUs in terms of overall system performance. Replacing my stock i7-920 with an i7-965 and its faster 6.4 GT/s QPI bus should help cut the CPU overhead down some, and it is about a $75 upgrade these days.