[Benchmarks] Performance of Intel RST NVMe RAID0 configurations

@100PIER :
Meanwhile I have succeeded in creating an Intel RAID array consisting of 2x250 GB Samsung 960 EVO SSDs (downgrading the mainboard BIOS from v7.30 to v2.00 did the trick).
The NVMe RAID0 system is very stable and seems much snappier than the previous non-RAID NVMe SSD setup.
Very interesting: there is no NVMe Controller listed within the Device Manager. Everything is managed by the “Intel Chipset SATA RAID Controller” (DEV_2822) and its driver v15.2.10.1044.

Here are the results of a first benchmark test:


And here are the details about the settings shown by the Intel RST Console v15.2.10.1044:

Intel RST Console v15.2.10.1044 with NVMe RAID0 details.png

@Fernando ,
I am very happy that you finally succeeded. Very good job!
Very strange that you had to downgrade the mainboard BIOS to v2.00?
Did you create the RAID0 within the UEFI BIOS or under W10 with the Intel RST Console?
If no NVMe Controller is listed within the Device Manager, does that mean the configuration also works if no Samsung NVMe driver was ever installed before the RAID0 was created?
The ANVIL score is excellent and quite similar to a single 960 PRO 1TB solution (16125).

Yes, this was the advice given by the ASRock Technical Support until they can offer an updated BIOS v7.xx.

The RAID array was created from within the BIOS by using the Intel RST RAID Utility v15.2.10.1044.

No NVMe driver has been installed and no NVMe Controller is listed. So there is no worry about the Samsung NVMe driver problems.

Once I have the updated BIOS with full NVMe RAID0 support, I will test the Intel RST driver v14.8.12.1059 as well and check whether it performs better.

After slightly optimizing my RAID0 system (I disabled unneeded tasks running in the background), I got these benchmark results:


@Fernando ,
This score is now the reference for a bootable NVMe RAID0 solution.

@Fernando ,
Here is a report about a bootable NVMe RAID0 960 PRO 2TB solution running on the Intel Chipset SATA/PCIe RST PREMIUM Controller (v16.0.1.1018 WHQL) on a Z370 machine.




You can compare this with a proprietary solution (HighPoint) using a RAID0 add-in card running in the same machine:


ANVIL appears better for the “PREMIUM” solution: 16203 vs. only 14046 for the “HPT” solution.
Conversely, ATTO appears far better for the “HPT” solution: 3900/4555 vs. only 2980/3625 for the “PREMIUM” solution.

You can compare the ANVIL score with a non-RAID solution (960 PRO 1TB) running on an old X99 platform: => 16414


The NVMe RAID0 solution apparently offers far less performance than one would expect.
The ‘normal’ RAID0/non-RAID ratio should be about 2, yet we get < 1!

So, what is the benefit of a RAID0 NVMe vs. a non-RAID NVMe solution (bootable or not)?

@100PIER :
Thank you very much for your interesting report.
Since it matches the “RAID Performance” Sub-Forum better than the “Storage Drivers” one, I have created a new thread here and put your and my NVMe RAID benchmark results into it.
If I find other posts with a similar topic somewhere else, I will move them into this new thread as well.
I hope that you agree with moving your post.

@Fernando ,
Thanks for the creation of this topic.
It will be very useful, mainly once VROC and VROC-like solutions become more mature. (I have not seen many posts on the forum about these technologies.)

Do you have any idea why the NVMe RAID0 performance does not meet expectations?
Do you think I have missed a tip somewhere?
All the RAID settings were done from the BIOS very easily (I captured about ten photos while setting the specific parameters for each M.2 device). The Intel “Premium” software set was not installed, only the pure driver.
It is strange that HWiNFO does not detect a RAID but independent SSD devices!
Roughly, the RAID0 performance is equal to that of a standalone SSD…
No throttling; the temperature is quite perfect, < 48 °C max.

So I frankly have no idea why this limitation exists.
Maybe there is a conceptual/architectural NVMe bottleneck when 2 fast NVMe devices are organized into a RAID0. I use a 44-PCIe-lane CPU (i7-8700K), so I can’t do better.
The 960 PRO SSDs’ firmware is identical to the recommended version, and I have no Samsung NVMe driver installed.

I don’t know what caused it (maybe a software or even a hardware issue), but your results are not normal, and you should be cautious regarding the general evaluation of your test results.
My own results verify that a RAID0 with 2 identical NVMe SSDs as members performs much better than a single NVMe SSD of the same model.

@Fernando ,
I have now installed and used the latest Intel "Premium" v16.0.2.1086 driver and get better results, but still a low RAID0/non-RAID ratio of 1.06 instead of the expected 1.8.



On your side, what ANVIL score do you get for your single 960 EVO 250GB? I assume about 10000?
Your RAID0 960 EVO (2x250GB) ANVIL score is about 16000.
So you should get an ANVIL score of at least 20000, or maybe more.

On my side, a single 960 EVO 500GB gets an ANVIL score of about 12200.


On my side, a single 960 PRO 1TB gets an ANVIL score of about 16000; my expectation for a RAID0 2TB solution was about 30000, yet I get ‘only’ 17100:



So, do you have any idea about this NVMe RAID0 performance anomaly?

I have benchmarked NVMe RAID0 solutions on an X99 and on a Z370 machine.
On both platforms I observe and measure the same abnormal RAID0 NVMe M.2 performance.

My current assumption is that there is a conceptual RAID0 NVMe problem somewhere, visible on different architectures and chipsets.

@100PIER :
You cannot double the total benchmark scores of 2 single NVMe SSDs by creating a RAID0 array.
1. A RAID array is not able to read and write very small files (e.g. 4KB ones) faster than a single (non-RAIDed) disk drive.
2. The faster the single disk drive is, the smaller the performance boost from creating a RAID0 array of such drives.
3. Only with HDDs is it possible to double the READ and WRITE scores, and even then only when processing big or very big files.
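Point 1 can be illustrated with a toy model (my own sketch, not from this thread; the stripe size matches the 32KB value used later in this thread, and the function name is mine): a request only touches as many RAID0 members as the number of stripes it crosses, so a 4 KB random read always stays on one member, while a large sequential transfer is split across both.

```python
STRIPE = 32 * 1024  # bytes; stripe size used for the RAID0 arrays in this thread

def members_touched(offset: int, length: int, n_members: int = 2) -> int:
    """Number of RAID0 members a single I/O request is spread across."""
    first_stripe = offset // STRIPE
    last_stripe = (offset + length - 1) // STRIPE
    return min(last_stripe - first_stripe + 1, n_members)

print(members_touched(0, 4 * 1024))     # 4 KiB random read -> 1 member, no RAID0 gain
print(members_touched(0, 1024 * 1024))  # 1 MiB sequential read -> both members in parallel
```

This is why the small-file (4K) scores of a RAID0 stay close to those of a single SSD, while only large transfers can approach twice the throughput.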

@Fernando ,

Thanks for your explanation.
Yes, my "RAID0/non-RAID" ratio calculation was based on HDDs SAT6G experience with big files.

So, with SSDs it does seem another story !!

Looking at the DiskMark or ATTO benchmarks, you can observe interesting results for M.2 NVMe RAID0:



The Samsung 960 PRO 1TB specification for Sequential Read/Write is 3500/2100 MB/s.
An M.2 NVMe "Intel Premium" RAID0 with two Samsung 960 PRO 1TB members offers, under the best conditions, about 3665/3100 MB/s Sequential Read/Write.

So the benefit of such an M.2 NVMe Premium RAID0 is about 1.48x for WRITE operations (big files), and only 1.05x at best for any READ operations.
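The two ratios quoted above can be verified with a quick calculation (the MB/s figures are from this post; the variable names are mine):

```python
# Sequential Read/Write figures (MB/s) quoted in this post
single_read, single_write = 3500, 2100  # Samsung 960 PRO 1TB spec
raid0_read, raid0_write = 3665, 3100    # measured RAID0 with two members

read_gain = raid0_read / single_read    # ~1.05: reads barely benefit
write_gain = raid0_write / single_write # ~1.48: writes clearly benefit
print(f"READ x{read_gain:.2f}, WRITE x{write_gain:.2f}")
```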

Personal conclusion for M.2 NVMe RAID0 SSDs (whatever solution is used, Intel "Premium" or VROC-like):

A true benefit for WRITE, no benefit for READ operations.
The consequence is that a bootable M.2 NVMe RAID0 under W10 seems to make no economic/performance sense; in contrast, an M.2 NVMe RAID0 storage solution does make sense (when needed) for big files and very high-speed applications.
Booting from an M.2 NVMe RAID0 brings no benefit compared with a single M.2 NVMe device.
Maybe directing the data write operations to the system disk makes sense to exploit the better M.2 NVMe RAID0 write performance (for some specific applications).

@100PIER :
I agree with you.
The creation of a RAID0 array consisting of natively already very fast SSDs only makes sense if the main task is the processing of very big files (like video encoding).
By the way:
1. While working on office tasks or browsing the Internet, you will not notice any performance difference between a RAID0 consisting of 2 or more PCIe/M.2-connected SSDs and a single PCIe/M.2-connected SSD.
2. A user who wants the best possible performance should consider the increased risk of losing all RAID0 data if any of the RAID0 array members dies.

Fernando, yes, I totally agree with you.
Using a bootable RAID0 M.2 solution also as data storage is risky without a capable backup application.
I have tested several “system backup applications”, and very few of them recognize or accept an NVMe RAID0 configuration!
Does VROC technology seem to be a pure ‘marketing concept’ without any real benefit for the users?

There are a lot of users who just want to see, or show someone else, their extremely high benchmark scores, without caring whether they benefit from those scores in their daily work.
A much cheaper solution for such users would be to run their single SSD in “RAPID Mode” using Samsung’s Magician.

Since there are new Intel RST drivers from the RST v16 and v17 platforms and a brand-new Win10 19H1 Insider Build 18323 available, I decided to do a clean install of the new Win10 19H1 build onto my already existing RAID0 array and to repeat the benchmark comparison tests which I had published a few days ago.
I wanted to know
a) whether the NVMe RAID0 performance depends on the in-use NVMe driver version and, if yes,
b) which Intel NVMe driver performs best for an Intel RST NVMe RAID0 array.

Test system:

  • Chipset: Intel Z170 (mainboard: ASRock Fatal1ty Z170 Prof. Gaming i7, BIOS version: 7.50)
  • RAID0 members: 2x250 GB Samsung 960 EVO SSDs
  • Usage of the RAID0 array: as system drive
  • OS: Windows 10 x64 Pro Insider Preview Build 18323, freshly installed in UEFI mode
  • Intel RaidDriver BIOS module: RST v15.5.1.3017
  • RAID0 stripe size: 32KB
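As a side note on the 32KB stripe size listed above: the stripe size determines which array member a given byte offset lands on. A minimal sketch of that mapping (my own illustration; the names are mine, not part of any Intel tool):

```python
STRIPE = 32 * 1024  # bytes; the stripe size configured for this RAID0 array

def raid0_member(byte_offset: int, members: int = 2) -> int:
    """Index of the RAID0 member that stores the given byte offset."""
    return (byte_offset // STRIPE) % members

# Consecutive 32 KiB stripes alternate between the two SSDs:
print([raid0_member(i * STRIPE) for i in range(4)])  # [0, 1, 0, 1]
```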

Tested NVMe drivers:
  1. Intel RST RAID driver v14.8.18.1066 WHQL dated 09/06/2017
  2. Intel RST RAID driver v15.5.2.1054 WHQL dated 04/24/2017
  3. Intel RST RAID driver v15.5.5.1059 WHQL dated 06/01/2017
  4. Intel RST RAID driver v15.44.0.1010 dated 02/07/2018 (= Win10 in-box Intel RAID driver since RS4)
  5. Intel RST RAID driver v15.9.4.1041 WHQL dated 03/20/2018
  6. new: Intel RST RAID driver v16.7.10.1030 WHQL dated 11/16/2018
  7. Intel RST RAID driver v16.8.0.1000 WHQL dated 12/03/2018
  8. new: Intel RST RAID driver v17.0.1.1075 WHQL dated 12/28/2018

Here are my test results:
  1. Intel RST driver v14.8.18.1066:


  2. Intel RST driver v15.5.2.1054:


  3. Intel RST driver v15.5.5.1059:


  4. Intel RST driver v15.44.0.1010:


  5. Intel RST driver v15.9.4.1041:


  6. Intel RST driver v16.7.10.1030:


  7. Intel RST driver v16.8.0.1000:


  8. Intel RST driver v17.0.1.1075:


Ranking of the best performing Intel NVMe RAID drivers for my Z170 system:
  1. Intel RST driver v16.8.0.1000
  2. Intel RST driver v16.7.10.1030
  3. Intel RST driver v15.9.4.1041
  4. Intel RST driver v15.5.2.1054

The measured benchmark scores were so close together that I doubt the user would notice the performance differences.

@100PIER and other interested users:

Since there are meanwhile some new Intel RST drivers and a new final Win10 version available, I did some new benchmark comparison tests with my Z170 system yesterday.

Test system:
The same as in January 2019 (post #17), except the OS, which was now Win10 x64 Pro v19H1 Build 18362.175, freshly installed in UEFI mode.

Tested Intel RST RAID drivers:

  1. Intel RST v14.8.18.1066 WHQL dated 09/06/2017 (latest from the v14 platform)
  2. Intel RST v15.44.0.1010 dated 02/07/2018 (Win10 in-box RAID driver since RS4)
  3. new: Intel RST v15.9.6.1044 WHQL dated 03/01/2019 (latest from the v15 platform)
  4. new: Intel RST v16.8.2.1002 WHQL dated 02/27/2019 (latest from the v16 platform)
  5. new: Intel RST v17.2.6.1027 WHQL dated 03/19/2019
  6. new: Intel RST v17.2.11.1033 WHQL dated 05/07/2019 (currently latest from the v17 platform)

Here are my test results:
  1. Intel RST driver v14.8.18.1066:


  2. Intel RST driver v15.44.0.1010:


  3. Intel RST driver v15.9.6.1044:


  4. Intel RST driver v16.8.2.1002:


  5. Intel RST driver v17.2.6.1027:


  6. Intel RST driver v17.2.11.1033:


Ranking of the best performing Intel NVMe RAID drivers for my Z170 system:
  1. Intel RST driver v17.2.6.1027
  2. Intel RST driver v15.9.6.1044
  3. Intel RST driver v17.2.11.1033

Compared with my previous tests done in January, I now got
a) generally better scores (probably due to the meanwhile matured Win10 build) and
b) the Intel RST v17 platform's v17.2.6.1027 RAID driver as the winner.

EDIT at 06/19/2019: Since I forgot to enter the results I got with the currently latest Intel RST RAID driver v17.2.11.1033, I have added them today. It is interesting that this latest driver is obviously less performant than the previously released v17.2.6.1027 driver, which belongs to the same v17.2 development branch. The added results didn't fundamentally change the performance ranking.

@Fernando ,
Very interesting test results.
On my side, I have now sold my HPT NVMe RAID x16 4xM.2 PCIe 3.0 add-in card, since it went unused at home.
So now I only have the opportunity to test SATA 6G RAID0 configurations with the Intel v17.2.11.1033 drivers handling some old Samsung SSD 850 PRO and 840 PRO devices on X99 under v19H1 Build 18362.175.
ANVIL test results are quite good and stable with the more recent Intel drivers:



Under the Z390 PC I use a non-RAID NVMe device configuration and also a SATA 6G 850 EVO device:




Maybe my test results are in the wrong place.

Since I currently have access to both my new AMD X570 chipset system and the still-working Intel Z170 system, I took the opportunity to compare an Intel NVMe RAID0 array with an AMD NVMe RAID0 array using exactly the same RAID0 array members (2x250 GB Samsung 960 EVO NVMe SSDs). Onto the RAID0 array of both systems I did a clean install of Win10 x64 v2004 and ran some benchmark tests.
Here are the results (left pictures: Intel RAID0, right pictures: AMD RAID0):





Just for comparison purposes, here are the benchmark results I got today with my new AMD X570 system running Win10 x64 v2004 on a single 1 TB Sabrent Rocket 4.0 NVMe SSD, using the generic MS in-box NVMe driver (left pictures) and the mod+signed generic Samsung NVMe driver v3.3.0.2003 (right pictures):