AMD NVMe RAID Explained and Tested!

Establishing a baseline

If you’re not interested in how the arrays were configured and just want the benchmarks, skip to the next page using the navigation at the bottom.

The initial build used one SX900 256GB SATA SSD configured as the boot drive, one 4TB WD Blue mechanical HDD as the mass storage drive and three XPG SX8200 Pro 256GB NVMe M.2 SSDs installed on the ASUS X399 ZENITH EXTREME motherboard.

I installed Windows 10 on the SX900 before doing anything else. The next step was to install all drivers with the exception of the AMD RAIDXpert2 software; I left that until we actually started using RAID. With a typical setup (a SATA SSD on the motherboard controller in AHCI mode), Windows installation and booting wouldn’t require any special drivers, so it should be rock solid for testing. It also meant that I wouldn’t have to reinstall Windows every time I changed the configuration.

Baseline testing used the default settings throughout, with the BIOS flashed to version 1701.

The baseline test results across all three M.2 SX8200 Pro SSDs using the NVMe controller (with RAID mode disabled) were consistent, with all drives set up as individual storage devices and no RAID in place. This meant that we didn’t have any outliers (good or bad) in the group of three XPG SX8200 Pro SSDs.


First up, I’m going to talk about how to set up the arrays, then I’ll talk through the performance observations and benchmarks in the second half of the article, where you can see all the numbers together for the different configurations.

Setting up RAID 1

The linked guide from ASUS explains how to do this very clearly, but there is one element that wasn’t documented and I’ll call it out in the steps below.

The Windows 10 RAIDXpert2 software is web based and operates via an Apache agent. This makes the interface very easy to view and configure. I’ll include some screenshots in the configuration guide below.

BIOS changes

At this point, I’d installed all drives and loaded the Windows 10 operating system on the SATA SX900 SSD. Windows detected the three XPG SX8200 Pro drives without any special BIOS changes. So far this is a standard non-RAID installation, but now we want to set up RAID for the NVMe storage only.

I followed the instructions from the ASUS manual and entered the BIOS setup.

Disclaimer: This is the process I followed for the configuration listed – your experience may vary. Your motherboard, BIOS and configuration may be different, so refer to the manufacturer’s manual when doing anything in the BIOS.

The ASUS guide suggests that the controller for the SATA storage devices (Advanced > SATA Configuration sub-menu) be set to RAID per the screenshot below, but I found that NVMe RAID worked fine with the SATA configuration in AHCI mode, as is typical/recommended for single-drive configurations. I also didn’t want to change this as I’d installed Windows in AHCI mode for the earlier benchmarks, and changing it would have meant a reinstall (although there are other hacks to solve this).

I’d seen this question on a forum somewhere and I can confirm that the X399 ZENITH EXTREME does allow SATA RAID and NVMe RAID to be configured independently so the SATA setup can run as AHCI while the NVMe configuration can be in RAID.

The NVMe RAID configuration is in another part of the menu structure under the Advanced menu, AMD PBS sub-menu.

NVMe RAID Mode is set to disabled by default so I had to change this to enabled.

The AMD RAIDXpert configuration isn’t available until NVMe RAID is enabled in the BIOS and the system rebooted.

Before I saved the changes and exited the BIOS, there was one more change I needed to make, to the Compatibility Support Module (CSM).

In the Boot menu, I selected the CSM option.

Per the ASUS instructions, I set this to disabled, then saved my changes, exited the BIOS and restarted the system.

Once the system had rebooted, I needed to go back into the BIOS again to access the NVMe RAID configuration section. The RAIDXpert2 option is now available at the end of the Advanced menu, and I was able to start setting up the array(s).

RAID setup

Can’t Create an Array – why?
This is the part that wasn’t in any documentation and to be fair, if I’d just plugged in a set of new SSDs that had never been initialised it wouldn’t have been an issue. The thing was that I’d already initialised and used the SSDs under Windows 10 to establish non-RAID performance statistics at the start of the experiment. A brand new setup won’t need these steps but I’ve included the process I had to follow for anyone who might be adding more drives to an existing typical configuration and wanting to switch to RAID.

When I looked at the array properties, all drives appeared to be in their own single-drive arrays. Deleting the file systems through Windows Disk Management isn’t enough; I needed to delete the arrays through this BIOS function so that the disks could be re-assigned. This is destructive and any data on the drives will be lost, as this process effectively sets them back to a blank, unformatted default state.

I needed to delete Arrays 3, 4, and 5 to re-initialise the drives for reconfiguration. This was done in the Advanced\RAIDXpert2 Configuration Utility\Delete Array screen, where I set Arrays 3, 4, and 5 to “On” and then clicked Delete Array(s).

Sure enough, under Array Management the Create Array option was now available, so I was good to go with our first RAID setup.

If you are planning to implement AMD NVMe RAID, regardless of your technical ability or experience, I’d strongly recommend studying your motherboard manual, reading the AMD RAIDXpert 2 documentation and making sure that you have set aside a reasonable block of time to concentrate and methodically do the job in one sitting.

Playing it Safe with RAID 1

I started with a fully redundant RAID 1 array using 2x256GB SSDs in a mirrored configuration, allowing for the failure of one drive. This means that the size of the array is 256GB in total: we are sacrificing the capacity of one drive, and a little performance, for the security of the data.

After selecting Create Array from Advanced\RAIDXpert2 Configuration Utility\Array Management, I had to select the RAID Level and chose RAID 1.

The next step was to select the physical disks. I selected disks 1 and 2, then clicked Apply Changes to return to the previous menu.

The size was 255,406MB as expected because I’m mirroring, not spanning the storage space available.
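As a sanity check on the reported sizes, the usable capacity for the simple RAID levels discussed here can be sketched in a few lines of Python. This is an illustrative calculation only (it assumes identical drives and ignores the small amount of space a real controller reserves for metadata); the 255,406MB figure is the formatted capacity of one 256GB SX8200 Pro as reported by the BIOS.

```python
def usable_capacity(drive_mb: int, drives: int, level: int) -> int:
    """Approximate usable capacity in MB for simple RAID levels.

    Assumes identical drives; real controllers reserve a little
    space for metadata, so treat the result as a ballpark figure.
    """
    if level == 0:          # striping: capacity of all drives combined
        return drive_mb * drives
    if level == 1:          # mirroring: capacity of a single drive
        return drive_mb
    if level == 10:         # striped mirrors: half the total capacity
        return drive_mb * drives // 2
    raise ValueError("unsupported RAID level")

per_drive = 255_406  # formatted size of one 256GB SX8200 Pro, in MB

print(usable_capacity(per_drive, 2, 1))  # RAID 1 mirror: 255406 (~256GB)
print(usable_capacity(per_drive, 2, 0))  # RAID 0, two drives: 510812 (~510GB)
print(usable_capacity(per_drive, 3, 0))  # RAID 0, three drives: 766218 (~766GB)
```

The two RAID 0 figures match the 510GB and 766GB array sizes reported later in the article.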

I went with the default settings for CacheTagSize, Read Cache Policy and Write Cache Policy, then hit Create Array.

The new RAID 1 array appeared in the list of available arrays and I was able to see its properties per the screenshot below.

At this point I was almost good to go and the array started to initialise. This is a little misleading, though, as the array isn’t really ready to use. The system indicates that it is and you can access the drive, but performance will be poor until the array has finished initialising.

I booted to Windows and initialised the new logical drive.

The next step was to install the RAIDXpert2 drivers and restart the system. Once the system booted, I was able to open the software via a web interface, set a username and password for admin purposes and watch as the array initialisation completed. This took about 10-15 minutes for a dual-drive RAID 1 configuration. The first screenshot shows the array still being prepared, while the second shows it fully operational.

Then it was benchmarking time but we’ll hold off on the comparison until the end and look at the results in context with all of the rest.

Please note that whilst RAID 1 provides a level of protection against a drive failing, it is no substitute for backing up. RAID 1 just means that if a drive dies, you can still access your data until you replace the failed drive. This is more of a mission critical/business continuity thing than anything else. I’d probably recommend keeping a backup or sync folder on mechanical HDD storage rather than duplicating NVMe storage in this way.

“Risking it” with RAID 0

I wanted to see the scaling and performance differences between running 2x256GB NVMe SSDs in RAID 0 and then 3x256GB SSDs. AMD NVMe RAID will scale up to seven devices but for this exercise I only have three due to some “first world problems.”

This means that the data is spread across both drives, providing improved read/write performance but at double the risk of complete data loss: if one drive fails, all data on the array is lost. Whilst this might seem reckless, it can be the way to go if you are careful about what you store on the array. Programs (especially Steam/Origin libraries), temp files and working data that is backed up either in real time or at regular intervals will mitigate the risk of data loss and give you performance levels in excess of a single large-capacity NVMe SSD.
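The “double the risk” point follows from basic probability: a striped array loses its data if any member drive fails. A minimal Python sketch makes it concrete; the 3% per-drive annual failure rate below is an illustrative assumption for the example, not a measured figure for these SSDs.

```python
def array_failure_prob(p_drive: float, drives: int) -> float:
    """Probability that a RAID 0 array loses data in some period,
    assuming each drive independently fails with probability
    p_drive in that period. The array fails if ANY member fails."""
    return 1 - (1 - p_drive) ** drives

p = 0.03  # assumed annual failure rate per drive (illustrative only)
print(round(array_failure_prob(p, 1), 4))  # single drive: 0.03
print(round(array_failure_prob(p, 2), 4))  # two-drive stripe: 0.0591
print(round(array_failure_prob(p, 3), 4))  # three-drive stripe: 0.0873
```

For small per-drive failure rates the array risk is close to (but slightly under) the per-drive rate multiplied by the number of drives, which is why “double the risk” is a fair rule of thumb for a two-drive stripe.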

The next step was to delete the RAID 1 array and reconfigure the drives in a RAID 0 setup. Back into the BIOS we go…

First stop is the Array Management screen.

I needed to delete the RAID 1 array and re-use those SSDs for the striped RAID 0 configuration.

After selecting the RAID 1 array, the ROG BIOS wanted to make sure I knew what I was doing and asked for the obligatory confirmation.

With two available SX8200 Pro SSDs, I was able to create the new RAID 0 array. As with the RAID 1 setup earlier, I had to select the RAID level (RAID 0 this time) and the disks I wanted to use.

Then I chose the defaults like last time for consistency.

After creating the array, I verified the configuration by checking the properties.

When configuring the SSDs in RAID 0 we end up with an array of 510GB in capacity. The process is pretty straightforward and almost identical to the RAID 1 setup. As far as Windows is concerned, the logical volume is treated the same and I had to re-initialise the file system. I’ve also included a RAIDXpert2 screenshot below.

In this configuration we’re able to use the full capacity and, in theory, the combined speed of the two storage devices, with the trade-off being that there is no data protection in the event of a failure.
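To illustrate why every member drive is essential in RAID 0, here is a minimal sketch of how striping maps logical data onto the drives. The 64KB stripe size and round-robin layout are assumptions for illustration; the actual stripe size depends on the settings chosen when the array was created in RAIDXpert2.

```python
STRIPE_SIZE = 64 * 1024   # assumed stripe size in bytes (illustrative)
DRIVES = 2                # members of the RAID 0 array

def locate(offset: int) -> tuple[int, int]:
    """Map a logical byte offset on the array to
    (drive index, byte offset on that drive)."""
    stripe = offset // STRIPE_SIZE
    drive = stripe % DRIVES            # stripes rotate across the drives
    local_stripe = stripe // DRIVES    # which stripe on that drive
    return drive, local_stripe * STRIPE_SIZE + offset % STRIPE_SIZE

# Consecutive stripes land on alternating drives, so any large file
# is split across every member: lose one drive and the file is gone.
for off in (0, 65536, 131072, 196608):
    print(off, locate(off))
```

The alternation is also where the performance comes from: sequential reads and writes keep both drives busy at once.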

“Living Dangerously”
(RAID 0 with three ADATA SX8200 Pro NVMe SSDs)

Not that I’m trying to labour the point, but I would never recommend using a striped array (RAID 0) for an operating system or any critical data that is not regularly backed up, regardless of your selected brand and its track record. RAID 10 provides the best of RAID 0 and RAID 1 by mirroring the striped drives, but it also doubles the cost of your storage devices and is unlikely to be worthwhile for almost all enthusiasts. It’s probably cheaper and more realistic to simply use high-speed NVMe drives and invest in some decent backup or synchronisation software to keep your data safe.

The process was the same as when I configured RAID 0 with two drives, except that I’m now using all three for a grand total of 766GB of space.

Managing the Risk of RAID 0

Cloud Storage

The use of cloud services such as Google Drive, OneDrive, Dropbox and others also reduces the risk of data loss for any PC, so long as you have a constant Internet connection for the sync to occur.

I’d use an array like this for my games to reduce load times. Games are getting a lot bigger now with larger textures, high-quality audio, etc., and it isn’t uncommon for AAA titles to be 50-90GB. Thanks to launchers like Steam, Origin, Battle.net and others, backing up game files and restoring them has never been easier. Basically, if I had to recover from an array failure it would be an inconvenience but not a disaster, and certainly worth the risk if the performance gain is there. Seriously, the less time sitting in front of a “loading screen” the better, right?

M.2 – It’s all about “Location, Location, Location…”

The motherboard location of your M.2 slots is even more important when using NVMe RAID 0. Not all motherboards have the M.2 slots in ideal places or provide protection against heat from other components such as graphics cards or stock CPU coolers where the fan blows directly down onto the CPU socket. I’ve had an M.2 drive fail due to excessive heat from a graphics card that was installed over the top of the M.2 slot – despite a thin heat spreader.

ASUS implemented the ROG DIMM.2 slot near the DDR4 memory slots in the top right corner of the X399 ZENITH EXTREME motherboard. The DIMM.2 module lets you mount two M.2 drives on the module and then insert it into the board like a RAM module. This places the drives in a position that typically gets case airflow from the front intake fans and is far enough away from other heat sources like VRMs or graphics cards.

The central M.2 socket is under the chipset heat sink and managed with a thermal pad. Installing the SSD in this location is a little more involved but still a quick and easy exercise. The key point here is that if you do use the ASUS X399 ZENITH EXTREME, look for a ‘clean’ NVMe SSD without a heat sink attached, as clearance is likely to be an issue otherwise.

In the current configuration, all SSDs are in cool locations on the motherboard and the risk of heat-induced failure has been mitigated as much as technically possible.

Other Considerations

It is also possible to boot from an NVMe RAID array, but you need to load a driver during the Windows installation process so that the operating system can recognise the storage controller and logical drive.

The steps for this are outlined towards the end of the linked PDF for the ASUS X399 ZENITH EXTREME motherboard. This is no different to what was previously required when using SATA RAID or discrete RAID controllers going back to Windows 2000 (or even earlier). You just have to make sure that you have the appropriate driver, per the instructions, on a USB drive so that you can tell the Windows installer where to find it. Once the driver is loaded, the Windows installer can see the RAID array as a logical volume and you can select it as your system volume.

I wasn’t able to easily remove the AMD RAID drivers to return the drives to being accessible in Windows as standard NVMe SSDs. I ran through the process a number of times and even reached out to ASUS to make sure I wasn’t doing something stupid. A fresh Windows installation was able to see the SSDs without any issues. So keep in mind that if you want to dabble in the world of NVMe RAID on your X399 platform, you might have to reinstall your operating system in order to go back if you change your mind.


