Due to reader request, we’ve written up a guide on how to configure AMD NVMe RAID and then compared the performance of different NVMe RAID configurations against a single SATA SSD.
The objective is to cover how to configure AMD RAID, show you some benchmarks and then summarise at the end to help you decide if this storage approach is likely to suit your needs. All of the solid state storage devices used in this project were supplied by ADATA and were brand new with no prior use or wear. I’d also like to thank Jack from ASUS Australia in particular for his support from the very start of the project and throughout the testing.
When you buy a modern system it will almost always come with an SSD. All SSDs are significantly faster than hard drives, and two main types are available: SATA and M.2 NVMe. The legacy mSATA format still exists but has been superseded by the M.2 socket and is seldom found on modern motherboards.
SATA SSDs (regardless of their connector) use the same interface as mechanical hard drives and top out at transfer rates of about 550MB/s. SATA SSDs are available in a 2.5″ form factor like the SU900 drive that you’ll meet in a minute or in an M.2 configuration. NVMe drives transfer their data over PCIe lanes from either the CPU or the motherboard chipset and can run at much faster speeds than the alternative SATA technology.
So we have ascertained that SATA SSDs are fast, hitting speeds up to about 550MB/s, with NVMe drives faster again at around the 3GB/s mark. The next step up after this is a RAM disk, where volatile system memory is used as storage, but you still have to load the drive image into memory from a non-volatile location on boot. RAM disk capacities are limited and the cost per GB is prohibitive for large scale use. This is where the performance benefits of RAID might give power users or enthusiasts the storage speed edge they are looking for.
What is RAID?
RAID is best described at Wikipedia here but to cherry pick the relevant points:
“RAID (Redundant Array of Inexpensive Disks or Drives, or Redundant Array of Independent Disks) is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both…. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance.”
The main implementations of RAID are:
- RAID 0 – block level striping across all physical disks in the array without redundancy or parity.
- RAID 1 – data is mirrored across two physical disks with full redundancy. This approach does not use striping or parity.
- RAID 10 – a RAID 0 array of nested RAID 1 arrays. A minimum of 4 drives is required for RAID 10. This approach has full redundancy. We didn’t cover RAID 10 in our testing as we only had three drives and the use case is very specific.
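To make the striping and mirroring concepts concrete, here’s a minimal Python sketch of where consecutive data blocks land under each level. The function names are our own illustration only; real RAID controllers work at the block-device level, not on Python lists.

```python
# Illustrative sketch of RAID 0 (striping) vs RAID 1 (mirroring) block placement.

def raid0_layout(blocks, num_drives):
    """RAID 0: stripe blocks round-robin across all drives, no redundancy."""
    drives = [[] for _ in range(num_drives)]
    for i, block in enumerate(blocks):
        drives[i % num_drives].append(block)
    return drives

def raid1_layout(blocks):
    """RAID 1: mirror every block onto both drives, full redundancy."""
    return [list(blocks), list(blocks)]

data = ["B0", "B1", "B2", "B3", "B4", "B5"]

# RAID 0 across three drives: each drive holds a third of the data, so
# sequential transfers can hit all three drives in parallel.
print(raid0_layout(data, 3))  # [['B0', 'B3'], ['B1', 'B4'], ['B2', 'B5']]

# RAID 1: both drives hold everything; usable capacity is halved but
# either drive can fail without data loss.
print(raid1_layout(data))
```

The round-robin placement is why RAID 0 sequential throughput can approach the sum of the individual drives, while RAID 1 writes can never be faster than a single drive because every block is written twice.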
SATA hard drives and SSDs have supported RAID via motherboard and discrete storage controllers for decades, so it isn’t a new concept. Sometimes system builders will use RAID for speed, sometimes for redundancy and sometimes for both. Motherboards typically have between 4 and 8 SATA ports for connecting hard drives or SSDs, and configuring a RAID array is generally very simple as long as all the drives are the same capacity and speed.
This is a little more complicated for NVMe RAID because each storage device needs PCIe lanes, and these can be shared with graphics cards, networking devices, USB and other storage controllers.
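As a rough back-of-the-envelope illustration of why lane allocation matters, the sketch below assumes roughly 985MB/s of usable bandwidth per PCIe Gen3 lane (8 GT/s with 128b/130b encoding) and tallies what a three-drive x4 NVMe array costs in lanes:

```python
# Back-of-the-envelope PCIe lane budget for a three-drive NVMe array.
# ~985 MB/s per Gen3 lane is an approximation (8 GT/s, 128b/130b encoding).

GEN3_LANE_MBPS = 985   # approximate usable bandwidth per Gen3 lane
LANES_PER_NVME = 4     # each M.2 NVMe drive uses an x4 link
NUM_DRIVES = 3

lanes_needed = LANES_PER_NVME * NUM_DRIVES
array_ceiling_mbps = lanes_needed * GEN3_LANE_MBPS

print(f"Lanes consumed by the array: {lanes_needed}")          # 12
print(f"Theoretical link ceiling: {array_ceiling_mbps} MB/s")  # 11820
```

This is one reason Threadripper suits an experiment like this: with 64 PCIe lanes from the CPU (60 usable after the chipset link), a x16 graphics card plus three x4 drives fit without sharing, whereas mainstream platforms typically have to steal lanes from the GPU or route drives through the chipset.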
Testing AMD NVMe RAID was something we had wanted to do for a while with the Threadripper platform due to the large number of PCIe lanes available from the CPU. We also had some reader questions about how well NVMe RAID scales given that these drives already operate at very high (relatively speaking) speeds in a standard configuration. Another question we’ve had and also seen debated on forums was about the performance hit of RAID 1 where the drives are mirrored.
All of these questions will be answered with benchmarks and some step-by-step instructions on how we configured the ASUS X399 ZENITH EXTREME test system with an AMD Threadripper 2950X processor. The process may differ on other motherboards but the concept should remain the same.
So, what did we test?
We tested RAID 1 and RAID 0 (first with two NVMe drives, then retested with three in the array). We also benchmarked the same SX8200 Pro NVMe drive in NVMe mode and as a single drive in RAID mode to see if the controller mode made any difference to performance. The SU900 SATA SSD was tested in AHCI mode only and is used as a typical reference point for comparing the NVMe storage against the cheaper and slower SATA technology.
It’s a project like this that I really look forward to. I’d like to give a massive shout out to AMD, ASUS, ADATA and Thermaltake for the products that we used.
ASUS ROG X399 ZENITH EXTREME Test Rig Specification
• AMD Threadripper 2950X
• Enermax LIQTECH 240mm water cooler
• 32GB (4x8GB) G.SKILL FLARE-X DDR4 3200
• ASUS ROG X399 Zenith Extreme Motherboard
• ASUS ROG STRIX GTX 1080Ti OC
• ADATA SU900 256GB SSD
• ADATA SX8200 Pro 256GB NVMe (3 drives in AMD NVMe RAID 0)
• Seagate Firecuda 2TB 3.5″ HDD
• Corsair RM-850 PSU
• Thermaltake View71 Case
• Logitech G810 keyboard
• Razer DeathAdder Chroma Mouse
• BenQ EX3501R Monitor
ADATA provided four SSDs for this project: one SATA SSD (an SU900 for a boot drive) and three M.2 NVMe drives for the RAID performance testing and instructional purposes. All drives are 256GB in capacity; the detailed specifications are below for reference.
| | XPG SX8200 Pro | ADATA SU900 |
|---|---|---|
| Performance (max)* | Read 3500MB/s, Write 3000MB/s | — |
| Max 4K random read/write IOPS* | Up to 390K/380K | — |
| Interface | PCIe Gen3 x4 | SATA 6Gb/s |
| Form factor | M.2 2280 | 2.5″ |
| NAND flash | 3D TLC | 3D MLC |
| Available capacities | 256GB / 512GB / 1TB / 2TB | 128GB / 256GB / 512GB / 1TB / 2TB |
| Dimensions (L x W x H) | 22 x 80 x 3.5mm | 100.45 x 69.85 x 7mm |
| Weight | 8g / 0.28oz | 59.5g |
| Operating temperature | 0°C to 70°C | 0°C to 70°C |
| Storage temperature | -40°C to 85°C | -40°C to 85°C |
| MTBF | 2,000,000 hours | 2,000,000 hours |
| Warranty** | 5-year limited warranty | 5-year limited warranty |

\* Performance may vary based on SSD capacity, host hardware and software, operating system, and other system variables.
\*\* Warranty coverage ends at the drive’s rated TBW or the end of the warranty period, whichever comes first.

Detailed spec sheets for both drives are available from ADATA.
We’ve used the ASUS STRIX GTX 1080 Ti OC in the past and with such a high end system on the table, it seemed appropriate to include that in the build. The STRIX cooler from ASUS is one of the best going around and allows gamers to really push their graphics card without the noise of a tornado coming from their rig. The GTX 1080 Ti in this build hammers out the frames when gaming but barely makes a sound. Again, this would be my first choice of graphics card cooler in a personal build.
With all of this high end kit, the project has been given every chance to deliver the best results possible for a non-overclocked system.