Supermicro integrated LSI 3008 in IR mode, latest BIOS, firmware, and EFI module = more than 60 second boot?

Just really hoping someone in this community can explain something. With the Supermicro integrated LSI 3008 in IR mode on my ASRock X99 Extreme11, I am running the latest controller boot BIOS (08.29.01.00), firmware (12.00.02.00), and EFI module (14.00.00.00) (the stock EFI BSD was empty!!!). The MEI (9.1.37.1002) and board BIOS (3.20) are also updated. I tried deleting the array after the update and then reinstalling Windows again, clearing the BIOS, and even the security keys. Alas, it still takes more than 90 seconds to boot from off. Which is odd, considering it hangs for 50 seconds detecting PCIe devices (per the debug code lookup) during the boot sequence. Despite all these things it is exactly the same speed as the OROM… why?

Also, for some odd reason (though possibly related), I cannot enable ECC for the RAID and/or boot with C6 enabled (all the other states are available, mind you).

Sincerely love this community…
… THANK you Fernando

PS: Is there a thread for my controller? I can't find it… blush

Sincerely
Sean Sauvé

What happens when you boot off any other disk drive?
Have you contacted ASRock and/or Supermicro support?

With the LSI disabled it boots at the speed I am used to, around 25 seconds, which is nothing to write home about; my friend's old X79 RIVE could fast-boot in under 10 seconds. For me, Ultra Fast Boot could never be "too fast"; here it's just too slow and annoying, given that I went through painstaking effort to get EFI and Windows reloaded.

Frankly, this ASRock board sometimes just seems really watered down after all this. The UEFI BIOS is so minimal. Fernando, I am sure you would understand what happened here; I don't. One time the MEI went into some failsafe when I booted into the UEFI settings, and several new menus opened up: an overclocker's/tweaker's treasure trove. There were so many menus my jaw dropped! For many of them I thought, YES, I knew my board could adjust those settings, but alas, one boot later it was gone. I felt like I was in a root/dev menu, except the items were labeled and categorized and helpful text was added to each entry. This couldn't be a mistake. If I could get into this menu again, perhaps I could adjust timeouts or something. But honestly my UEFI has lost features with each update. Why? I don't understand; something is wrong.

Actually, come to think of it, I was going to stay with the LSI disabled, but I have been really disappointed with my drive performance in RAID 0 on both the Intel and the LSI controller, despite all my reading, effort, and attempts. I am getting REALLY poor RAID speeds; even in the Samsung diagnostic tool every single performance index, including IOPS, is WAY off. Sub-1 GB/s transfer rates copying from 6 x Samsung 850 to 2 x Samsung NVMe!!! Why did I buy this powerhouse? It is so slow compared to my X79, and for the life of me I can't figure out why.

Booting from another drive with the same settings gives the same result. I have contacted ASRock and have an RMA presently underway for other issues with the board; perhaps this will fix the problem?

@futiless:

I wish you good luck!

Off Topic:

Please shorten your signature. It takes up too much space. 2-3 lines with the most important info (mainboard, chipset, storage drives, SATA mode and OS) are enough for this forum.

You could try with the long SATA cables to the "6 x Samsung 850 Pro, in external enclosure" unplugged.
How did you install the 2 x 1080s? I would try slot 2 (near the CPU) and slot 5 (far away from the CPU).
You have a lot of drivers to load, so 90 seconds is not so long. If I were you I would even try to make the boot as long as possible, with a RAM test, so the mainboard gets up to temperature slowly.
With the next Win10 or Win11, your mobo will be fine.

Let me just say you might be on to something. Although, can I ask why the manufacturer always indicates: 1 - single card in PCIE1; 2 - two graphics cards in SLI in PCIE1 and PCIE4; 3-way in PCIE1, 2 and 4? This is straight from the manual. Yet AIDA64 indicates that the DMI System Slot 1 usage is "Empty", Slot 2 "Empty", and Slot 3 "In Use, x16 Long", and it isn't until you get to Slot 6 and Slot 7 that they are operating at x8 Long.

Which brings up a slew of questions nowhere outlined for me exactly. First of all, I get why the x8 split on Slots 6 and 7: that would be for the LSI controller, as only physical slots 1 and 4 are occupied. It also makes sense considering there are 3 devices. OK… but why 7 slots? Why are Slots 1 and 2 empty? And why does Windows see the second physical card as the first, for SLI and in Windows? That seems inverse to logic.
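For what it's worth, the DMI slot table AIDA64 shows can also be dumped directly via WMI. A minimal sketch, assuming Python with the third-party wmi package on Windows (the usage-code mapping is my assumption from the SMBIOS spec):

```python
# Sketch: dump the DMI system-slot table (the same data AIDA64 reads).
# Requires the third-party "wmi" package: pip install wmi (Windows only).
import wmi

# Assumed SMBIOS "current usage" codes: 3 = Available, 4 = In Use.
USAGE = {1: "Other", 2: "Unknown", 3: "Available", 4: "In Use"}

for slot in wmi.WMI().Win32_SystemSlot():
    usage = USAGE.get(slot.CurrentUsage, str(slot.CurrentUsage))
    print(f"{slot.SlotDesignation}: {usage} "
          f"(raw MaxDataWidth: {slot.MaxDataWidth})")
```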

But I should also point out that with the LSI disabled and the onboard Intel RAID 0 in use, it essentially keeps them crossed and brings slot 6 or 7 up to x16 while disabling the other (I forget which one gets disabled, either 6 or 7).

SO why do you recommend a different PCIe slot layout than the manufacturer does? I mean, you are probably using sound logic; I just wonder why the discrepancy from the standard.

The reason I had such a long signature is so people could consider all the various nuances of this build, as many times I am shocked at what ends up being to blame for the matter at hand.

What did you mean by the next Win10/Win11? I am using Win 10 now.

Also, getting to temperature slowly? What is meant by this? That the readings take time to be accurate?

The PLX 8747 bridge creates two x16 connectors but is linked upstream by one x16 or one x8.
I think you can use either of the two slots without a problem.
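To put rough numbers on that sharing, assuming PCIe 3.0 and a x16 uplink (per-lane payload throughput is the usual ~985 MB/s figure):

```python
# Rough PCIe 3.0 bandwidth math for the PLX 8747 arrangement above.
MB_PER_LANE = 985   # approx. PCIe 3.0 payload throughput per lane
UPLINK_LANES = 16   # the 8747's upstream link (x8 on some boards)

uplink = UPLINK_LANES * MB_PER_LANE
print(f"Uplink shared by both x16 slots: ~{uplink / 1000:.1f} GB/s")
print(f"Both cards busy at once: ~{uplink / 2 / 1000:.1f} GB/s each")
```

So even with both GPUs saturating the bridge at once, each still sees roughly x8-equivalent bandwidth, which is why either slot works.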



Where I would plug the NVIDIAs is in the slot below the M.2 connectors; maybe slot 1 is near the CPU, or on the other side.

The chipset has a limited number of lanes, so plugging in a lot of hardware means sharing them.
As you use only two x16 slots, maybe you could use a mainboard that has fewer x16 ports: something with 2 slots at x16 plus one at x8 or x4 for a SCSI/SAS/SATA RAID card.
In this review they have one ASRock: http://www.tomshardware.com/reviews/best…rds,3984-2.html
Or this one: http://www.gigabyte.com/products/product…spx?pid=5942#ov

By the next Win10 or Win11 I mean the next M$ OS, made with drivers for X99 chipset boards.

And temp as in temperatures: I mean that you need the board to be at the right temperature to get its best.

You are totally right, except that when the LSI controller is on, the bottom half all drops to x8, with the LSI at x8 as well. I can't understand why my old X79 RIVE board had a far more extensive UEFI, performed much better, and could do a RAID fast boot in 20-some seconds, while here I am getting USB 3.0 rates of around 35 MB/s, and devices only detect in a couple of different slots, intermittently… and I have updated EVERYTHING!!! I will try your slot arrangement, as it has always been odd how the 2nd slot is seen as the 1st. Anyway, I'm so frustrated; I want my X79 back, and this thing has cost a fortune.

Does anyone know if I can get at more settings if I unprotect my MEI with an EEPROM programmer?

I found a few things about your mobo.
First, you should disable the LSI controller, because your drives are not compatible with it.

here are the links to read :
http://www.storagereview.com/supermicro_…3008_hba_review
http://www.supermicro.com.tw/products/ac…-S3008L-L8i.cfm

This list has the tested and validated drives for the LSI 3008 controller:
http://www.supermicro.com.tw/support/res…&sz=3.5&ctrl=61

There is no Samsung.

You have to read page 27 of the manual ftp://europe.asrock.com/Manual/X99%20Extreme11.pdf to understand how to connect the SSDs to the mainboard.
Here is a copy of the rules:


Serial ATA3 Connectors (S_SATA3_0_1: see p.12, No. 12) (S_SATA3_2_3: see p.12, No. 13) (SATA3_0_3: see p.12, No. 14) (SATA3_1_4: see p.12, No. 15) (SATA3_2_5: see p.12, No. 16)

These ten SATA3 connectors support SATA data cables for internal storage devices with up to 6.0 Gb/s data transfer rate.
If the eSATA port (ESATA1) on the rear I/O has been connected, the internal S_SATA3_3 will not function. If the eSATA port (ESATA2) on the rear I/O has been connected, the internal S_SATA3_1 will not function.
If the Ultra M.2 Socket (M2_1) has been occupied, the internal S_SATA3_2 will not function. If the Ultra M.2 Socket (M2_2) has been occupied, the internal S_SATA3_0 will not function.
RAID is supported on SATA3_0 ~ SATA3_5 ports only.

With 10 connectors, that should allow the 2 x M.2, plus 4 SSDs in RAID and 2 non-RAID, plus the two eSATA ports connected.
With the SSDs on ports SATA3_1, SATA3_3, SATA3_4 and SATA3_5, it should be OK to have them in RAID…
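Those sharing rules reduce to a small lookup. A sketch encoding the manual's table quoted above (port names exactly as in the quote):

```python
# Which internal port each shared connector disables,
# per the X99 Extreme11 manual rules quoted above.
DISABLED_BY = {
    "ESATA1": "S_SATA3_3",
    "ESATA2": "S_SATA3_1",
    "M2_1":   "S_SATA3_2",
    "M2_2":   "S_SATA3_0",
}
RAID_PORTS = [f"SATA3_{n}" for n in range(6)]  # RAID only on these

in_use = ["M2_1", "M2_2", "ESATA1", "ESATA2"]  # this build's layout
print("Disabled:", [DISABLED_BY[x] for x in in_use])
print("RAID-capable:", RAID_PORTS)
```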
Have fun plugging it to the best ;']

EDIT by Fernando: I have put the ASRock rules into a "spoiler" (to save space) and combined the cut sentences into complete ones (for better readability)

My mistake; I checked the list again, and it has tabs, including one for SSDs… but yours are not there either.
All the SSD entries from all manufacturers start with "HDS-2". Hope that helps.

Edit: maybe you should apply an update to your Win10. If you read the RAID manual for your board, ftp://europe.asrock.com/Manual/RAID/X99%…e11/English.pdf, it has this at the end, on page 20:

If you install Windows® 10 64-bit / 8.1 64-bit / 8 64-bit / 7 64-bit on a large hard disk (ex. disk volume > 2TB), it may take more time to boot into Windows® or install driver/utilities. If you encounter this problem, you will need to follow the instructions below to fix this problem.
Windows® 7 64-bit / 8 64-bit / 8.1 64-bit / 10 64-bit:
A. Please request the hotfix KB2505454 through this link: http://support.microsoft.com/kb/2505454/
B. After installing Windows® 7 64-bit / 8 64-bit / 8.1 64-bit / 10 64-bit, install the hotfix KB2505454. (This may take a long time; >30 mins.)
C. Reboot your system. (It may take about 5 minutes to reboot.)
D. Windows® will install this hotfix then reboot by itself.
E. Please start to install motherboard drivers …

Maybe it will solve your bug too… at least it cannot hurt.

Hrmmm, well, I really do appreciate and am flattered by the extensive time you have taken here for me today. At first I did find the layout a little odd, but I tackled that months back, on day one.

In my case the SAS controller layout vs. Intel is like a snake formation vs. left to right. BUT there is also the ASMedia controller for eSATA and the 4 S_SATA ports. I am aware of how use of those ports disables the M.2 and eSATA.

I presently have 2 NVMe drives and 6 Samsung 850s. I have had the 850s working in RAID 0 on the Intel controller, but found the speed of 1.5 GB/s, peaking at just over 2 GB/s, fairly bland indeed, and the POST times were not at all fast like on my old ASUS RIVE board.

Presently I am on RAID 10 on my LSI controller, having tried RAID 0 and found that it is somehow quicker in RAID 10? I don't really want RAID 10; I have long-term archive drives with images of the working system. I am just really confused by why it is so slow to boot, and by the slow performance in general.

How did you format your drives? exFAT is faster than NTFS and does not have the fragmented-MFT problem, and so avoids the data corruption problems when the number of files gets high.
Plus, at RAID creation: what cluster size did you pick?
Like Norbs, I think the PSU is the problem: Recommended AHCI/RAID and NVMe Drivers (38)
There is a review of yours: http://www.jonnyguru.com/modules.php?nam…=Story&reid=298

The mainboard should use a lot of power with that CPU, which I tried to find, but Intel does not list it: https://ark.intel.com/fr/products/series…-Family#@Server
The only one with 18 cores is the 2699, so say it uses 150 W at full load, plus the PCI-E connector that uses 150 W, plus RAM and M.2, plus all the drives always on at the same time: rail 1 should be near its max power limit of 540 W.
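A quick sanity check on that rail-1 budget, using only the figures above (RAM, M.2 and drive draw are unknown, so the total is a floor, not a measurement):

```python
# Rail-1 sanity check; only the two figures from the post above are used.
cpu_w = 150        # assumed 2699-class full-load draw
pcie_conn_w = 150  # mainboard PCI-E power connector
rail1_limit_w = 540

known_floor = cpu_w + pcie_conn_w
print(f"Known draw: at least {known_floor} W; "
      f"{rail1_limit_w - known_floor} W left on rail 1 "
      f"for RAM, M.2 and all the drives")
```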

Maybe you should try to draw more power from rail 2, which you are using at half its capacity.
I would power the PCI-E power connector of the mainboard, plus all of the drives, from rail 2, using PCI-E-to-Molex adapters like this one:
http://www.performance-pcs.com/power-sup…le-sleeved.html

Plus others to get some SATA female connectors.

Last advice: do not download from Steam at the same time you play ;']

Formatted NTFS via the Windows 10 installer (deleted and recreated the drive), 128 KB cluster size.

PSU-wise, I am using individual lines for the 4 RED PCIE 6+2-pin connectors (so no problem there).

The water pump uses one 6-pin-to-Molex adapter. Another goes to each of the onboard Molex connections; each time I used a separate 6-pin line. Finally, I have all the drives in an enclosure that uses ONE Molex connection, but what do these SSDs really draw anyway? Besides, the enclosure manufacturer makes it that way. Though I guess I could try that, and the cables, next, to your point.

Essentially, though, I can read my overall draw on my voltage stabilizer, and it shows the system drawing only between 1.8 and 2.5 amps, peaking at 5-6 amps under full load (out of the North American standard 15 A). So it never even climbs over 800 W peak, ever; it's really efficient. When I had 3 Fury Xs for one week it was over 10-11 amps!?! It still rode out the storm, though…
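For reference, the wall-side arithmetic behind those readings, assuming the nominal 120 V implied by the 15 A North American circuit:

```python
# Wall-draw arithmetic from the voltage-stabilizer readings above.
VOLTS = 120  # assumed North American nominal mains voltage

readings = [("idle, low", 1.8), ("idle, high", 2.5),
            ("full-load peak", 6.0), ("3x Fury X peak", 11.0)]
for label, amps in readings:
    print(f"{label}: {amps} A -> about {amps * VOLTS:.0f} W at the wall")
```

So the 5-6 A full-load peak works out to roughly 600-720 W at the wall, consistent with never crossing 800 W.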

I am curious, though, since my PSU is older, whether it has issues with the C6 state because of the Haswell-E minimum-load thing or not, as I would really like to be able to boot my computer without it freezing in the early stages when C6 is enabled.

I am getting a new board tomorrow, as I am really fed up; let's see how that pans out for me.

Nice, you did use the PCI-E power… I would use the PCI-E line for the pump to also power the drives (if possible).
The 128 KB cluster size is a bit too big. I would go with 2 x RAID 0: one with 4 drives at a 32 KB stripe, the second with two drives at 16 KB, and format the volumes as exFAT with clusters of 4 or 8 KB.
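If you try that, here is a minimal sketch of the reformat step, driving the stock Windows format tool (the drive letters are placeholders, and this erases data, so double-check them first):

```python
# Sketch: reformat volumes as exFAT with a chosen cluster size using
# the built-in Windows "format" tool; answer its prompts when asked.
# Drive letters below are placeholders -- formatting ERASES the volume.
import subprocess

plan = {"E:": 8192, "F:": 4096}  # drive letter -> cluster size in bytes

for letter, unit in plan.items():
    subprocess.run(f"format {letter} /FS:exFAT /A:{unit} /Q",
                   shell=True, check=True)
```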
Have fun with your new board.