No NVMe PCIe RAID support on all mainboards: how can I mod my BIOS?

My Z170 mainboard does not seem to be capable of creating NVMe RAIDs, but I have seen BIOSes which are capable of doing it.

Does someone know which modules need to be extracted and integrated into a Gigabyte mainboard BIOS?

@jan771 - What is your exact model, so I can look at the manual and block diagram? For RAID on NVMe, both slots need to be connected to the same controller (i.e. CPU or PCH), and RST control needs to be enabled for each slot in the BIOS.
It won't be possible if one is attached via the PCH and the other via the CPU.
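If you want to double-check which controller each slot hangs off, HWiNFO on Windows shows the bus layout, or a quick lspci from a Linux live system does the same; just a sketch:

    # drives behind the chipset's root ports appear under the PCH bridge in the tree,
    # drives on CPU lanes appear directly under a processor root port
    lspci -tv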

Do you know if the NVMe command set also has something like TRIM, which might be deactivated within a RAID? Or is there any other argument against a hardware RAID?
You are right: HWiNFO shows one drive as not under the PCH, namely the one in the PCIe M.2 adapter. But in the BIOS it makes no difference, only SATA drives are shown in the Z170 HD3P EZ RAID setup. If someone wants to buy my like-new HD3P, I offer it for 60 EUR now; I used it for two weeks only.

@jan771 :
All mainboards with an Intel Z170 chipset should natively support the creation of an Intel RST RAID array consisting of 2 or more NVMe SSDs.
So there is no need for any BIOS modding.

I doubt that and suspect that you either haven't chosen the required BIOS settings or haven't installed an Intel RST RAID driver which supports an NVMe RAID array.
My tip: Do a Google search for them both.

@jan771 - Under Settings >> IO >> SATA & RST, set RST Premium, then reboot and check again.

@Fernando - Isn't a certain RST version required to allow NVMe RAID? Also, doesn't what I mentioned apply here, about one NVMe being connected to the PCH and one to the CPU, or does that not always matter?

@jan771 :
>Here< is the link to a YouTube video about how to create an Intel NVMe RAID0 array with an MSI Intel Z170 chipset mainboard.
The related BIOS settings within the "Storage Controller" and "Boot" sections of the BIOS are different on a Gigabyte mainboard, but it should work nevertheless.
By the way, I had no problem creating an Intel NVMe RAID0 array and booting off it with my old ASRock Z170 chipset board.

@Lost_N_BIOS :
Yes, an Intel RST RAID driver from v14.8 up is required to be able to boot off an Intel NVMe RAID array (better choice: an RST v15 or v16 platform driver), and all RAID0 array members should be directly connected to the CPU if the user wants the best possible performance.

I cannot imagine that it matters how the M.2 is connected to the CPU, given that both are addressed the same way via PCIe commands. At least one M.2 should show up in the RAID tool then.

Also, I do not think that you cannot combine a v1.3 NVMe drive with a v1.2 NVMe drive in a RAID.

The RST version of course should matter; I have the Intel Rapid Storage Option ROM v15.2 in my BIOS. Or do you mean the UEFI driver included in the BIOS update?

Concerning the TRIM command in RAIDs: maybe RAID 0 is not as critical for SSDs as other RAID modes are. It all depends on the exact implementation of the RAID system. I found this paper on the topic:

https://www.itmz.uni-rostock.de/storages…aid-trim-tr.pdf

So if you want to buy an M.2 RAID controller, you will first have to find out whether its RAID option ROM/driver is capable of propagating TRIM requests down to the underlying M.2 drives. Otherwise you risk lower performance with the RAID set, maybe even lower than without RAID.
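A quick way to check this afterwards from a running OS (just a sketch, not verified on any particular controller) is to look at the discard capabilities of the block device the RAID volume is exposed as:

    # replace /dev/md126 with whatever block device the RAID volume shows up as
    # (Intel RST/IMSM sets are assembled by mdadm on Linux, typically as md126);
    # non-zero DISC-GRAN / DISC-MAX values mean discard (TRIM) reaches that device
    lsblk --discard /dev/md126

On Windows, "fsutil behavior query DisableDeleteNotify" only tells you whether TRIM notifications are enabled globally, not whether the RAID layer actually forwards them to the SSDs.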

OK, a brief update:

I tested multiple NVMe M.2 SSDs on two mainstream mainboards, a Gigabyte Z170 HD3P and an ASUS H170M-Plus.

Both show a weird behaviour: once you activate RAID in the BIOS, the configuration goes haywire, the board stops recognizing the M.2 and boot devices, and you can even get boot failures. It does not matter which PCIe port you put them in, it just does not work. In RAID mode only the M.2 in the single onboard M.2 slot can be added to a RAID; the others are not listed. Also discovered here: https://www.heise.de/forum/c-t/Kommentar…-31975770/show/
Thus, I do not recommend trying a RAID. But I have now discovered an option in the ASUS BIOS, "M.2 RAID Support: RST Controlled / Not RST Controlled". I did not set this option, so this might be the cause.

I now have a 4x M.2 RAID controller (needs x8/x8 or x4/x4/x4/x4 PCIe bifurcation, https://www.asus.com/support/FAQ/1037507) for sale, plus an SM961 MLC and either a WD SN730 TLC 256GB or a 512GB SN720 MLC. Please contact me by private message if you are interested.

You could maybe run two M.2 drives on this kind of mainstream mainboard and use them in a software RAID. But you cannot install Windows on them. It should somehow be possible though, since I read that Windows 8.1 Server can be installed on dynamic volumes, so maybe you can try to find out which drivers are needed and add them to install.wim or the like.
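If someone wants to try the install.wim route, the DISM calls would look roughly like this (just a sketch; D:\sources\install.wim, C:\Mount and C:\Drivers are placeholder paths, and the right driver package still has to be found first):

    rem mount image index 1, inject every driver INF found under C:\Drivers, then commit
    dism /Mount-Wim /WimFile:D:\sources\install.wim /Index:1 /MountDir:C:\Mount
    dism /Image:C:\Mount /Add-Driver /Driver:C:\Drivers /Recurse
    dism /Unmount-Wim /MountDir:C:\Mount /Commit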

@jan771 :
Since I have my own experience with a bootable Intel NVMe RAID0 array on an Intel Z170 chipset system (look >here<), I want to comment:

  1. My test results verify that it is possible to create a bootable Intel RST NVMe RAID0 array with an Intel Z170 chipset mainboard. So I think that it should be possible with a GA-Z170-HD3P as well.
  2. I can confirm that the risk of getting an unbootable system is much higher with an Intel NVMe RAID array than with a configuration where a single NVMe SSD is used as the system drive. Furthermore, an Intel RAID array needs much more time for the boot procedure. Since only the READ/WRITE processing of large files is accelerated by a RAID0 array, an office user will not even notice any performance gain during his/her normal work. In my eyes the speed benefit while processing very large files is not worth the risk of a boot failure, which is usually not repairable by the OS.
  3. It seems unlikely to me that you or any other forum member will be able to solve your problems by modifying the BIOS.

Well, NVMe has a limited range of advantages. Especially loading programs into memory is faster, but once you run intense I/O operations, like e-mail compaction in Thunderbird, disk cleanup or even DISM on install.wim, an HDD was much faster in my experience. A DISM command once ran for 6 hours on an NVMe drive on my PC vs. 3 hours on an HDD.
When copying a 120 GB games folder from an HDD to an NVMe drive the speed was also only 50-80 MB/s, whereas an HDD can achieve 80-150 MB/s.
Thus, an NVMe drive is only good for reading or writing very fast for a short time. Beyond that it is not useful, even in a RAID.

@jan771 : After having read your last post I would like to get an answer to the following question: Why did you start this thread?

Well, because I intended to install two M.2 drives as RAID 0 in order to double the TBW, size and speed, also for use as virtual memory.

Hi, this thread is kinda on subject for what I'm trying to do, so I'm posting here instead of starting a new thread. Yell at me if I should do something different, heh. I'm a newb to the forum and I've messed with a little lower-level stuff over the years, but I'm still kinda new to EFI/BIOS modding/chipsets/etc.

I want to add NVMe RAID support to my BIOS.

Yes, there are a million-and-one ways to boot a RAID array without BIOS support, but I want to boot directly from the BIOS so I don't need extraneous drives cluttering my chassis. Plus, I think it's kinda badass to do with an older mobo. To be clear, I am talking about NVMe drives that use 4 PCIe lanes each and cheapo adapter cards to connect them directly to bifurcated PCIe slots. I've tested, and bifurcation appears to work as expected on my mobo: I tested each NVMe slot on a cheapish 4-drive PCIe x16 adapter card and the drive was recognized in each slot (AFTER I finally got the bifurcation setting correct…).
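For anyone repeating that test from a Linux live stick, a rough way to sanity-check what the firmware/bifurcation actually exposes (just a sketch; nvme-cli is assumed for the first command):

    # list every NVMe namespace the kernel can see (needs nvme-cli)
    nvme list
    # or, without extra tools, count the NVMe controllers on the PCI bus (storage class 0108)
    lspci -nn -d ::0108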

I have a dual proc Supermicro X10DRU-i+ Mobo (info, firmware, etc at www .supermicro. com/en/products/motherboard/X10DRU-i+ ). Stock firmware does NOT support NVMe at all, but I already successfully added basic NVMe support to my BIOS using the guide at:
win-raid: Guide-How-to-get-full-NVMe-support-for-all-Systems-with-an-AMI-UEFI-BIOS.html#msg14810

And still being psyched from getting direct boot of NVMe working on an old-ish server machine, I WANT MORE!!!

I would assume all drives in a specific NVMe RAID array would have to connect to the same CPU, but I really have no idea what limitations would exist. I have 16 PCIe3.0 lanes available on CPU1 and up to 40 lanes on CPU2… Could make for some interesting RAID0 arrays and speed tests.

So… Some Q’s:
- Is there a RAID driver for NVMe M.2’s that I can slip into my firmware as easily as adding the basic NVMe support was?
- Would it be exactly the same installation procedure as the above linked guide, same or different efi volume (or something like that)?
- Any issues I should be aware of due to the dual proc mobo?


Thanks!
(Sorry about links, I’m too new-b to be allowed them in my posts!)

@pipe2null The motherboard has limited NVMe support via Intel VROC.

(UBU)
1 - Disk Controller
EFI Intel VROC for sSATA - 6.2.0.1034
EFI Intel VROC for SATA - 6.2.0.1034
OROM Intel VROC for SATA - 6.2.0.1034
OROM Intel VROC for sSATA - 6.2.0.1034

Compatible drives are restricted; see the VROC 6.2 user guide:
https://downloadmirror.intel.com/29564/e…-338065-008.pdf

Unfortunately RAID is a paid option that requires a hardware key, and I can't see that Supermicro has installed a header for such a key?! (See chapter 3.1 of the user guide.)

(attachment: vroc.jpg)

Without the key it's 'passthrough'.

Up- or downgrading is possible as long as one stays within the same line, here RSTe; the concept stays the same. Switching to RST modules would require changes in the BIOS region, too, and these changes aren't fully known/explored (as far as I know). Options have been added in the BIOS for configuring the chipset so that these features can be enabled/disabled in quite a differentiated way (depending on the chipset revision), look for example here:
Help Needed for RAID - Dell Optiplex 3070 - H370 Chipset (2)

@lfb6
Thanks!

So, let me make sure that I understand this, please correct if wrong:

1) My firmware ALREADY supports basic NVMe booting, via Intel VROC
2) My firmware ALREADY supports RAID for NVMe in ADDITION to existing sSATA and SATA channels, via Intel VROC
3) My firmware currently IGNORES/passes-through all forms of NVMe because a specific Key/File is not present
4) If I can find and insert only the "key/file" that Intel VROC is looking for, it is reasonable to believe I could enable NVMe RAID with no other firmware modification and the "NvmExpressDxe_4.ffs" would be redundant/not needed at all…

But RAID is currently supported and "turned on/available" for both my sSATA and SATA channels. So, is there a separate/different key for NVMe than for s/SATA, or are there separate binaries for the different channels, or… ?

Does this make sense or am I blurting crazy talk? heh.

1.) No. Passthrough just means that the module will not 'occupy' the disks, but will pass them through to the BIOS/other modules. As far as I understand it, it will not allow booting of non-RAID NVMe disks.
2.) Yes, under the conditions described.
3.) Yes, the hardware key is not present.
4.) Yes, under the conditions described for Intel/3rd party disks.

The modules are the same, but they're controlled by the hardware key and possibly by (probably hidden) BIOS settings, the latter depending on the chipset generation.

@lfb6
Thanks. I've spent the time since my last post doing a little RTFM'ing. SMACKING-MY-OWN-HEAD: the "key" is a freaking hardware dongle only usable on a special mobo header… OK, I'm slowly catching up on the whole VROC thing.

Since my mobo with stock firmware does not support booting NVMe at all, at least not the M.2 variety I own (neither rEFInd nor GRUB can see it, and both depend on EFI/BIOS services unless additional drivers are loaded), and since there is no physical VROC key header that I can tell, it appears Supermicro just used the VROC EFI modules as a convenience for handling RAID on sSATA and SATA for this older mobo? After modding my firmware to include the "NvmExpressDxe_4.ffs" driver, rEFInd and GRUB can both see my M.2 NVMe just fine, adding boot entries with efibootmgr works great, and I can direct-boot my NVMe drive no prob.
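For reference, the kind of efibootmgr call involved looks roughly like this (just a sketch, assuming the NVMe is /dev/nvme0n1 with the ESP on partition 1 and a GRUB loader; adjust the disk, partition and loader path for your setup):

    # create a firmware boot entry pointing at the EFI loader on the NVMe drive's ESP
    efibootmgr --create --disk /dev/nvme0n1 --part 1 --label "NVMe Linux" --loader '\EFI\debian\grubx64.efi'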


So, to add direct RAID boot of NVMe M.2's on my mobo, it seems there are 2 approaches:
1) Hack/crack the existing VROC EFI module to get it "turned on" and doing its thing (no other possible way, since there is no hardware "key" header?)
2) Mod my BIOS to install a new EFI RAID driver in addition to the "NvmExpressDxe_4.ffs"

Am I on the right track? If you are aware of anything like either option, especially a drop-in NVMe RAID DXE driver, I'd appreciate pointers in the right direction/resources. I've been digging through various threads, but there are years and years and years of 'em. :wink:

Correct so far. RSTe was a marketing thing for older workstation/server boards, a feature. Now it has become a marketing gag to allow/control these RAID abilities very granularly via BIOS settings for the PCH, see the manuals for the *370 chipsets. Earlier it was clear: if this module is in the BIOS, you can use all its features; now it's different, see the Dell thread.

So RSTe has the old features everyone was used to, but they changed the name and you have to pay for the new things you may like. Good for mobo manufacturers/system sellers and for Intel, bad for buyers.

1.) Correct, but possibly not that easy.
2.) Correct, but hard to find. It should work together with the BIOS itself, so that you can configure a RAID there, and it should come with its own software so that you can monitor your RAID without rebooting into the BIOS setup.

3.) Buy an adapter card that has its own RAID BIOS/firmware and 2 to xxx NVMe slots…

4.) Fiddle with older versions of RSTe which had NVMe RAID capability without the need for a key. These supported a very limited set of (now possibly outdated) Intel NVMe disks, and AFAIK nobody has tried it. But maybe you can find something in the 4.5/6/7 or 5.x tree. You're on your own there, though.

Another question would be:
- Do you want RAID just for speed, meaning RAID 0? Then any form of fiddling will be fine.
- Do you want RAID for increased safety? Then fiddling with backward BIOS mods would be rather counterproductive.

It is actually a scam and fraud, not "marketing", and comparable to selling a car with a four-cylinder engine of which you can only use two cylinders. The engine power is only calculated on a theoretical basis.

@lfb6

Please correct if wrong:
So basically Intel took software RAID code from the Linux kernel (example: https://git.kernel.org/pub/scm/linux/ker…s/md?h=v5.10.15 ) and ported it to the pre-boot EFI environment, but instead of creating an NVMe-compatible RAID driver for EDK2 or whatever (example: https://github.com/tianocore/edk2/tree/m…i/NvmExpressDxe ), they crippled the code to only work with select Intel NVMe's and then included it in previous RSTe versions. Then in 2019 they changed the name from RSTe to VROC and added the "hardware dongle key" to require extra cash to use even the crippled list of Intel NVMe's, and even more cash to use some 3rd party NVMe's, still crippled to only work with specific devices. And now Intel's new "VMD" feature for their latest enterprise processors that works with VROC is basically hardware-accelerated software RAID? But VROC will still fully function without VMD on the CPU, so again, it's just a software thing.

If that's more or less correct, then it seems "1) Hack/crack existing VROC efi module…" is not just about getting the feature turned on without a mobo header for the hardware key, but also about de-crippling VROC so it ignores the whitelist of Intel-approved devices and accepts any NVMe device. The same thing would probably apply to "4.) Fiddling with older versions of RSTe…".

As for "3.)": I've put a considerable amount of work into my box to try to get the maximum capability out of the old hardware. For example, it was waaay too loud when I got it, but with a couple more mods I'll have my system running fanless, without sacrificing any drive bays or blocking any slots, and on a tight budget. Since my mobo supports bifurcation, I could install up to 16 NVMe M.2's. I'll never do that, but I would definitely try a small bucket of budget NVMe's in a RAID0 just to try it out. Same thing goes for the 2nd box I bought at the same time, same mobo and procs but the smaller pizza-box chassis. Emphasis on "tight budget", thus old (but beefy) hardware. I'm attempting to learn/experiment with clusters and the enterprise side of things, so I'll be reconfiguring both boxes quite a bit, adding old GPU(s), etc. But all with old enterprise equipment that's cheap but still beefy enough to be useful. VERY LONG AND RAMBLING STORY SHORT, software-based RAID for NVMe is plenty good and far more versatile for me, so I can't justify the expense of a hardware RAID card. Inserting NvmExpressDxe_4.ffs into my BIOS image solved one big issue, but I still want to try to get bootable RAID0.
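For the non-boot data case, plain Linux mdadm already covers this kind of software RAID0, e.g. (a sketch only; device names are placeholders):

    # stripe two NVMe drives into one RAID0 md device (data only, not a boot volume)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt/raid0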

For completeness, I suppose I have to add another option: “5.) Port md code from linux kernel source to EDK2 for DXE/PEI”. This might be a bit outside my realm of possibility at the moment.


So "2.) Mod my BIOS to install a new EFI NVMe RAID driver…" seems the only reasonable alternative. Are there any projects/resources/hacks being done on this?


Thanks! You've been a big help as I wrap my noggin around this whole thing.