It seems that a bootable NVMe fakeraid may be possible on Dell desktops, but only with Intel drives.
But then the Solidigm documentation (the new owners of Intel's Optane/enterprise SSD business) says it's not supported. I was jumping back and forth between a Z370 and a Z490 with a hybrid Optane H10 drive, where I got a popup during install stating it was going to make the BIOS change for me, but I can't remember if it was an OEM or official Dell BIOS, as I'd tried so many. Ultimately the Optane app took over, and while the docs say you can choose RAID or Optane, nope: Optane gets greedy, and the Optane app can't be completely removed. Also, during the floppy-style driver install, two choices popped up - an NVMe controller option I had missed when scanning the .inf, just under the RST Premium controller - which gave me the idea of hardware-jacking the SKU to use any drive, Intel be damned.
I don't know why I just wrote all of that, when last night I was chasing down the legend of Surface Pro devices that ship with two drives combined by a custom Storage Spaces configuration at boot - essentially a bootable software RAID0. Someone here located a DXE driver, and someone else located a module with multiple references; all were inserted and flashed, with no luck. Digging a little more, I located an official Microsoft IT tool that is disguised as a nuke-and-pave tool, when it is actually recombining the drives during that process. I have another tool that supposedly runs a program virtually and documents the instances, DLL locations, all that jazz. If we can just disable the hardware check, it should do its thing, which is looking for disk 0 and disk 1 - that's it, passed off as a sanitizing tool, lol.

I had already gotten my revenge when I turned an $8 Optane drive (a PCIe 3.0 x2 drive disguised as a SATA SSD) into a cache drive using Intel's very own caching software! LOL. Anyway, I need some help or advice to pull this off. It's not exactly BIOS editing, but some of you gurus' worst guess is better than a room full of smart people - outside-the-box thinking is needed on this one.
If we patch the tool, then let it deal with the encryption/RSA, etc.
The identifiers are the SKU number for sure, but there's a good chance something else distinguishes the Surface models that are supported. There are many official Microsoft references to using Win10 1703 and a matching driver/firmware package to make the repairs; later references state any Win10 after that will do, but I'm not buying it.
After now needing dentures, I was able to locate a clean 1703, the firmware package, and the IT tool before my backlink declined the connection (a little trick I picked up from back in the day - a corporate email can take you a long way sometimes). This is outside my skillset, of course, but well within this community's expertise. If I see some interest, I will zip everything up and flesh out my discovery in a more cohesive form, with the aforementioned tools assembled.
On a quick side note: does anybody know how, what, and where to insert an NVMe driver into a Lenovo Insyde BIOS?
I'm aware of the RSA signing and have a possible workaround. I tore apart a few BIOSes, found combined NVMe SMM/DXE drivers in several locations, and halted. Something jogged my memory about the DXE core: it spools and hands things off from within a capsule. I see references to a capsule in the .ini, but I can find no .cap files.
Gotta go - I'm busy with my next trick: the world's fastest DeskMeet B660.
Thanks for reading my book, and I'm sorry, Fernando, that this is such a mess - feel free to work your magic on this sloppy post.
Edit by Fernando: Thread title customized
@keneglinton1
The meaning of your long post is rather unclear to me. What do you want to know or get from our side?
Do you have a problem with an NVMe Storage Driver or a problem with the NVMe BIOS module(s) or the related BIOS settings?
Please give us a short, but preferably precise summary of your request.
By the way - you don't need NVMe SSDs manufactured by Intel or its successor Solidigm to be able to create an Intel RST RAID0 array consisting of NVMe SSDs. It will work with all kinds of NVMe SSDs.
- I respectfully disagree, at least for the platforms in question. I have been unable to create an M.2 M-key NVMe RST RAID (not SATA SSD), bootable or otherwise, on an Alienware Aurora R7, R8, R10, or two separate R12s. The closest I got was on the B660 with VMD enabled for the H10 Optane hybrid drive, where in the end the Optane software replaced the RST GUI and Optane fusion took place instead of giving me the choice to RAID. Further research reveals certain motherboards can do it, but of those, only Intel SSDs are allowed; also, some versions allow only PCH-attached drives and others only PCIe-attached. You'd need a supercomputer to cross-compile all the standard and OEM versions (or @Fernando could do it in a few days). Revision histories show they allowed non-Intel drives for a while, and then the newest Solidigm docs state they don't support bootable NVMe RAID, period - only VROC. It's more than obvious Intel has abandoned RST (not RSTe), as even 11th gen will not be getting any more RST versions/support.
My question, simplified, would be:
"Could we use the Add Hardware method, or a right-click install of the .inf (either the NVMe client driver or the NVMe controller driver), on a non-Intel/Dell-OEM NVMe SSD, fooling RST, to rule out the 'Intel SSDs only' errata on the Alienware Aurora?" I suspect Dell trickery at hand, because I would normally agree with you (I've used RAID for over 30 years). I'm sure it's a Dell thing, and I'm over it.
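For what it's worth, the forced-.inf idea can be sketched from an elevated PowerShell prompt with pnputil. The package path and .inf name below are placeholders, not real RST file names, and whether RST's hardware checks accept a non-Intel drive afterwards is exactly the open question:

```shell
# Sketch only: stage and force-install an extracted RST NVMe driver package.
# "C:\RST\extracted\iaStorSomething.inf" is a placeholder path/name, not a real file.
pnputil /add-driver "C:\RST\extracted\iaStorSomething.inf" /install

# Confirm the package landed in the driver store:
pnputil /enum-drivers
```

pnputil ships with Windows 10/11, so no extra tooling is needed; the riskier part is whether the option ROM ever re-evaluates the drive at boot.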
2. Certain Surface Pro laptops are able to software-RAID0 two 512 GB drives and boot from them. I have found the repair tool for when a customer replaces their drives and breaks the stripe; this tool is used to recombine the drives, restoring bootability. Another thread here discusses injection of a UEFI driver extracted from a Surface Pro; it didn't work, but other attempts reveal multiple references.
A stable bootable software RAID0, without hardware RAID or fakeraid, is the holy grail and thought impossible. In addition, only Win10 1703 and the corresponding firmware package allow the re-formation.
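For context, the non-boot version of such a stripe is plain Storage Spaces, sketched below from an elevated PowerShell prompt. The pool and volume names are made up, and a volume created this way is normally not bootable - the Surface firmware and the repair tool described above are presumably what bridge that gap:

```shell
# Sketch only: build a "Simple" (striped, RAID0-like) Storage Spaces volume
# from two blank poolable drives. "SurfacePool"/"Stripe0" are made-up names.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "SurfacePool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "SurfacePool" -FriendlyName "Stripe0" `
    -ResiliencySettingName Simple -NumberOfColumns 2 -UseMaximumSize
```

`-ResiliencySettingName Simple` with two columns stripes writes across both drives, which is why it behaves like RAID0.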
So figuring out that a tool was involved, and then locating said deprecated tool AND the firmware, is legendary, but meaningless without help to finish it or someone to hand it off to.
I'm sure this info should be added to that other thread, or a new one started.
Again, my apologies, @Fernando - I'm doing this from a phone linked through my computer.
Please, kind sir, arrange and move this to your respectable level of standards.
@keneglinton1
Here is my comment:
- As you can see >here<, I had no problem creating an Intel RAID0 array consisting of 2 Samsung NVMe SSDs on my Z170 Chipset system. If you want to know how to do it, watch >this< video (for “normal” Intel RST RAID0 Arrays) or >this< one (for modern Intel VMD/VROC RAID0 Arrays).
- According to my knowledge a “Software RAID” is not bootable (the BIOS detects the specific track0 entries of an Intel RAID Array, but doesn’t see any software).
- The creation of a RAID0 Array consisting of 2 or more HDDs was very useful in former times for users, who wanted a better PC performance, but according to my own experience it doesn’t make much sense to do the daily work on a PC, which boots off an NVMe RAID0 array (the performance benefit compared to a not RAIDed NVMe SSD is very limited and not worth the risks of a failing boot procedure).
Edit: I have moved this thread into the better matching “Intel AHCI/RAID Drivers” section and gave it a shorter, but hopefully meaningful title.
Again, @Fernando, I would normally agree with you, but if you can show me what I'm missing on the Aurora platforms (Z370/Z490), I will happily fall on my sword. The R12 is goofy, as it has two different official Intel RST packages for Z490 (one just for 11th gen, and an 11th-13th gen package); the Dell OEM BIOS brings forth a third package. The R12 does not have VMD capability, nor do the AHCI/RAID options allow a change to RST Premium (I can't remember what the new selection is supposed to revert to), and all the docs I've read state it doesn't work with the RAID choice, as that is the SATA controller. The Ctrl+I RST BIOS entry method only sees the one M.2 drive, and only in the M.2 slot (which runs at PCIe 3.0 x4, not the PCIe 4.0 it's capable of - this due to Dell's trickery making a Z490 work when the processor needs a Z590 for its true abilities to be used). That's why my question was whether forcing an NVMe client or controller driver onto a non-Dell NVMe in a PCIe slot might let the onboard BIOS "see" the drive. The onboard RST is 17.1, so it's fairly new.
I was going to buy an Intel NVMe to test this further, but I'm over it. I have two different hardware solutions and a special low-level ramdisk driver with which I combine two RAM drives into the array, and I've figured out persistence for it. I'm getting 13,800 MB/s and it's bootable; the whole thing cost $400. Not a bad upgrade into the PCIe 5.0 speed category for $400 - some assembly required. My boot time is lightning and day-to-day use is noticeably faster, though technically it's hardware-backed RAID - but affordable.
Prior to this I was running an Acer Predator getting 7,200 MB/s, so the people saying "you will see no difference" are people still on PCIe 3.0 spewing jealous-hater nonsense.
I'm well aware of your past accomplishments, but I notice you shy away from anything you haven't actually done yourself. You are clearly a masterful problem solver, but I have to admit I've been a little disappointed, as every post I've made has netted zero suggestions, theories, or shots in the dark. As mentioned before, your and some members' wildest guess would be better than an answer from someone merely proficient in this field. Perhaps these things don't interest you, but as amazing as I am (I've been hacking/cracking/modding since cracking Commodore 64 games for lunch money), my motivation has always been to create an alternative to Intel's monopoly and put repair and upgradability back into people's hands, with BIOS mastering only part of it.
I thought everybody had heard of the unicorn that is Microsoft Storage Spaces combining two NVMe SSDs into a RAID0 during boot on select Surface Pros. Most info on this has been scrubbed, even from official revisions. Bootable software RAID0 has been possible for years on Linux, and the Surface Pro has had this ability right under our noses. There is also a way to create a self-signed Option ROM, but what to put inside it I have no idea - that is for BIOS gurus, not me.
I think I have all the tools needed, and it is going to be a UEFI driver injection solution if I can't get the software to apply to non-Surface drives. Driver injection and driver modification? That sounds right up your alley, @Fernando. Having these tools makes this not only possible but very feasible; without them, I can understand it would be a waste of time.
Now, if that doesn't interest you or anyone else in this community - trying to solve what they say cannot be solved - it will be a dark day indeed.
I already hear crickets. @lostinbios would understand.
thank you for your reply
@keneglinton1
The longer your posts are, the less I can keep hold of their content - please consider my age (I was born during the Second World War).
If the new thread title should have been your main question, I have already given you my short answer: “Yes!”
By the way - a BIOS modification is neither required nor makes any sense in your case.
Yes, that is a superb idea - ask Lost_N_BIOS for help (don’t forget to share his eMail address). He is (or was) much younger than me.
Good luck!
Your question and its brief content are not understandable, but I can understand your frustration. Here it is: yes, it can, but you don't need it for NVMe. RST supports SATA drives, and it can work with NVMe under the SATA configuration; hence your drive does not show under the NVMe configuration, because RST reads it as SATA. Simply do this trick and unlock the full potential of your NVMe: in the BIOS, just disable the VMD controller. Then you don't need Intel RST when installing Windows, and the BIOS can recognize your NVMe under the NVMe configuration and all the other BIOS storage/PCIe options. NVMe should be used under the AHCI controller, because that's the right one for NVMe, not RST or RAID. BTW, I'm using an ASUS TUF GAMING Z690 WIFI D4 PLUS, and the NVMe is a Samsung 980 1 TB. I can send you a picture of where it is located in the BIOS, if this answer serves you right.
@Loganhit
Welcome to the Win-RAID Forum!
The problem is that keneglinton1 will not be able to create and boot off an Intel RAID0 array once the VMD Controller has been disabled.
Regards
Dieter (alias Fernando)
Oh! That's the case. Then the question is not valid: the VMD Controller has to be enabled to create RAID configurations. If he is trying to create a hardware RAID (in the BIOS) without VMD Controller support, it would be like making semiconductors without silicon - making something without its basic material. I think that's the right answer. “This is the way.” What do you think, Sir @Fernando?
@Loganhit
I totally agree with you.
I appreciate your support, Sir (Dieter) @Fernando. Have a very nice day and a blessed life.
I am not understanding why @Loganhit or @Fernando think this can be done on a Dell Alienware Aurora R7, R8, or R12 that does NOT have VMD, with no path to RST other than SATA hardwired to the PCH.
Since they only have one M.2 slot, it gets used for Optane as far as RST functions go, so even PCIe-lane RAID is a no-go. Let me repeat myself once again: NVMe RAID is NOT supported on the Dell Alienware Aurora, as evidenced by multiple threads on the Dell community forum and confirmed by Intel's parting statement that Solidigm now owns and runs Intel's NVMe drive business.
You guys act like you can go into the BIOS and flip a switch even a child could flip. Have you ever looked at Alienware BIOS settings? Very limited selections. Setting up Optane was a nightmare, and these things do not have PCIe bifurcation. I already bricked the R7, barely recovering with a hacked BIOS forcing BIOS Guard to regenerate the BIOS (Dell's nanny software also reinstalled Windows over top of mine, with everything in Chinese), and nobody has mentioned on this forum that it's impossible to crack a Dell BIOS (other than lostinbios' NVRAM edit, which I don't think he ended up trying anyway).
In fact, the R12, which has PCIe 4.0, will not run NVMe at PCIe 4.0 speeds on its native M.2 slot.
I also don't understand the sarcasm. When I stated that lostinbios would understand my plight, the reply was "sure, go get his help" - but I didn't say Lost would help me, just that he would understand. Plus, everyone knows lostinbios is, well, lost and unreachable. I could understand that sarcasm if I had threatened to get his help over yours, which I did not.
I doubt you could take the same sarcasm, and that kind of gloating only works if you provided an actual solution; it only adds to my frustration.
Anyway, I opted for two dual PCIe 3.0 M.2 carriers with switches on one machine, and a 9500-8i PCIe 4.0 HBA card connected to a dual M.2 carrier via SlimSAS-8i on the other.
Neither supports RAID, let alone bootable RAID, but they do support Linux software RAID.
So, through magic and sorcery, I am now able to boot Win10 off a Linux software RAID0 array at 12,800 MB/s and 13,200 MB/s respectively, entering PCIe 5.0 speed territory out of these crappy Alienware machines. I'm sorry these are run-on sentences, but everyone else seems to understand how I write in texting format - that's because I'm lazy and I'm using a phone, lol.
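For anyone curious about the Linux half of a setup like this, the stripe itself is classic mdadm. The device names below are examples, and the Windows-boot plumbing (how the firmware sees the array before the OS loads) is left out:

```shell
# Sketch only: stripe two NVMe drives into a RAID0 array (run as root).
# /dev/nvme0n1 and /dev/nvme1n1 are example device names - adjust to your system.
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
      --chunk=128 /dev/nvme0n1 /dev/nvme1n1

# Persist the array layout so it assembles automatically on later boots:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

The chunk size of 128 (KiB) is just an illustrative choice; mdadm's default is 512 KiB.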
Rumor has it you could use Windows Disk Management to make a bootable RAID5, but only with Server editions, so I am still hoping for a solution, as the 9500-8i was kind of pricey.
thanks in advance for any helpful suggestions
@keneglinton1
As I already have written, I cannot help you.
If you should ever find a solution, please post it into this thread.
My general advice: Before buying an expensive PCIe adapter card it is a good idea to check whether it will be usable with your in-use PC.
Edit: Please change the misleading thread title. I already gave you the answer “Yes, non-Intel NVMe SSDs can be members of a bootable Intel RAID array”, but this obviously doesn’t solve your problem.
Oh boy! You really went through a lot. First, I apologize if I offended you. Let me take some time to check whether it's possible on your system or not. Well, I'm pretty tight on spare time, but I'll do my best.
I updated the misleading title that you changed for me, but only because it's solved.
The Aurora machines do not have a VMD controller with onboard RST; they only see SATA SSDs in RAID mode. But I'm sure you will gloss over that and still insist it will work, citing a Z170 as an example, when the Aurora models are Z370 and Z490 (I'm sure I can expect a reply saying Z370 and Z490 can do RAID, however).
I normally don't do this, but I will package my findings, including official Intel docs, as an addition to your personal knowledge base, which is in serious need of updating. You must be fun at parties, never admitting you don't know something, or - classic Fernando - doubling down on an incorrect stance.
I thank you for that, because now normally-humble Ken gets to be arrogant Ken, claiming the unchallenged champion-of-RAID-sorcery title and throne by being the first and only (that I'm aware of) to boot a UEFI Windows install off a Linux RAID0 array.
By the way, of course I checked compatibility with one of the cards: it runs at half speed (I'll fix that later) and boots super slowly. The HBA 9500-8i PCIe card boots quickly, and with my success of what I think I'll call Win2RAID, it is the cheapest ($500) PCIe 4.0 x8 bootable RAID0 solution, taking up only one PCIe slot. The other solution, two PCIe 3.0 x8 dual-M.2 cards (four M.2 drives total), achieves roughly the same speed, 12,800 MB/s, but uses two slots, leaving only an x4 slot for the GPU. Either way, the Aurora machines get updated to their true potential at a reasonable cost - until I figure out bifurcation for Dell computers, or the Surface Pro bootable RAID0, the precursor to Storage Spaces Direct developed for Win 8.1 (outside my reach for now).
PCIe 5.0 speed ain't too bad, amigo, but I appreciate the gaslighting and the friendly banter.
I'll post a tutorial once I clean it up, in a better format that's easier to follow, complete with third-party footnotes confirming that PCIe NVMe RAID0, bootable or otherwise, is NOT possible on Aurora-class desktops, and giving credit where it's due - with WinRAID at the top of the list, as I could not have done it without gleaning its info, or meatwars' spicy attitude. You sticks-in-the-mud rock!
@keneglinton1
It is great that you finally succeeded with your project and are willing to write a detailed Guide about the procedure.
Off Topic: Please keep in mind that this is a Support Forum, where Windows users may get help regarding their PC problems, not a place for personal ratings of the kindness/knowledge/character/behavior of certain Forum members.