[Problem] Haswell CPU Systems with 16GB Memory Modules

Edit: sorry. I didn’t describe the context.

My old GE60-16GC laptop has two SO-DIMM slots. This laptop has been running with two 8GB DDR3L memory modules for quite some time.

As programs have become more and more memory-hungry over the years, even 2×8 = 16 GB of total memory has turned out to be inadequate. On this old laptop I've frequently been seeing >90% system memory usage for several months. So I'm looking at memory upgrades for it, even though this laptop can definitely be called "obsolete" nowadays.

As both memory slots are already occupied, I have to replace the existing 8GB modules with larger 16GB ones.

However, it was not quite clear whether 16GB memory modules work with 4th gen (Haswell) Intel Core CPUs. I found rumors saying that although Haswell CPUs do not officially support 16GB modules, this might still be just a software-level limitation rooted in the BIOS memory initialization code, the MRC (memory reference code).

The rumor goes like this: 16GB modules are built from 8Gbit DRAM chips, and the MRC doesn't know how to handle such 8Gbit chips, so as soon as the MRC finds that a module marks itself in its SPD data as being built from 8Gbit chips, the system hangs (does not boot).
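Just to make the rumor concrete, here is what such a check would boil down to. This is only a sketch in C (the real MRC looks nothing like this); the only thing taken from the JEDEC DDR3 SPD spec is the byte-4 density encoding, everything else is hypothetical:

    #include <stdint.h>
    #include <stdbool.h>

    /* DDR3 SPD byte 4, bits [3:0] encode the per-chip density:
     * 0x02 = 1Gb, 0x03 = 2Gb, 0x04 = 4Gb, 0x05 = 8Gb */
    #define SPD_DENSITY_BANKS  4
    #define DENSITY_8GBIT      0x05

    /* Hypothetical illustration of the rumored behavior: the MRC
     * refuses (hangs) as soon as a DIMM reports 8Gbit chips. */
    static bool dimm_density_supported(const uint8_t *spd)
    {
        uint8_t density = spd[SPD_DENSITY_BANKS] & 0x0F;
        return density < DENSITY_8GBIT;
    }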

ASUS did officially release BIOS updates to support such 16GB modules, which made this rumor somewhat convincing to me. However, those BIOS updates were limited to a few desktop motherboards (and for the Ivy Bridge-E X79 platform, not Haswell); I haven't heard of ASUS doing the same for mobile platforms (laptops).

I decided to verify whether this rumor is factually correct. So far the outcome is not good: (on my old Haswell laptop) it does seem to be a hardware-level limit rather than a software one.


While browsing this old post, I read that Haswell CPUs being unable to take 8Gbit chips (16GB DDR3 modules) might not be a hardware limitation: [OFFER] Var. ASUS Sabertooth X79 BIOSes (NVMe/Bifurc./uCode/OpROM) - #80 by Lost_N_BIOS

There I found a link to the NotebookReview forums. Thanks to the Web Archive, that post can still be recovered, and the next page contains two snippets of assembly code.

I can find those two snippets of code in my BIOS as well, although they are not 100% identical.

Out of curiosity, I installed one 16GB DDR3L module together with an 8GB one into my GE60-16GC laptop.

The system boots as usual, and both modules are detected by the BIOS as well as by tools like HWiNFO. RWEverything can read out the SPD data of both modules, too.

However, the 16GB module does not actually work: neither (bare-metal) memtest86+ nor (under Windows) taskmgr.exe shows the full 16+8 = 24GB of memory; only 8GB is available.

If I take out the 8GB one, the system won't boot with the 16GB one alone: just a black screen, and I have to unplug the power supply to forcibly shut it down.

Now I don’t know what exactly is going on - does my experiment reveal that it is indeed a hardware limit?

After all, it doesn't match the "MRC makes the system hang" rumor when both the 16GB and the 8GB modules are present. (edit: according to the rumor, the MRC should hang the system as soon as it detects a 16GB module; however, as mentioned above, the system didn't hang with this 16GB+8GB configuration)

I can't see any code implementing the "artificially hang when SPD byte #4 is 0x05" logic in the quoted assembly snippets either.

Or maybe it’s still a theoretically unlockable software limitation, which requires a BIOS mod?

Furthermore, I tried to modify the first snippet of code (which contains a lookup table) to see what would happen.

(edit: the original BIOS code has a cap that limits the capacity of an individual memory module to at most 16GB; a 16GB module actually does not hit this hard-coded software cap)

If I lower the cap from 16GB to some smaller value like 15GB, the BIOS no longer detects the 16GB module, but it's still detectable by HWiNFO. (edit: thus it's at least verified that this code path is definitely executed)
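For reference, the cap behaves roughly like the following. This is a sketch only: the names are mine, and the real code reads the limit from a lookup table rather than a single constant:

    #include <stdint.h>

    /* Hypothetical per-DIMM capacity cap, in MB. Lowering this from
     * 16384 to e.g. 15360 (15GB) made the BIOS stop reporting the 16GB
     * module, which at least proves this code path really gets executed. */
    #define MAX_DIMM_CAPACITY_MB  16384u

    static uint32_t clamp_dimm_capacity(uint32_t capacity_mb)
    {
        /* DIMMs above the cap are simply treated as not present */
        return (capacity_mb > MAX_DIMM_CAPACITY_MB) ? 0 : capacity_mb;
    }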

The download link for my BIOS can be found on the MSI official site: https://download.msi.com/bos_exe/nb/E16GCIMS.120.zip

I extracted the module named MemoryInit from it with UEFITool.

ASUS official post about BIOS upgrade for desktop X79 mainboards: Max Your Mem: 128GB DDR3 Support On ASUS & ROG X79 Motherboards | ROG - Republic of Gamers Global

I downloaded two versions and then compared the binaries with BinDiff/Diaphora. Some minor modifications were identified, but I had no idea what they meant.

This post mentioned that MSI (probably) did something similar: 16GB SO-DIMMs to upgrade P570WM to 64GB - Clevo - Tech|Inferno Forums

Since X79 is Ivy Bridge-E, an even older architecture than Haswell, I think this probably means there is some hope…

Hmmm, MSI made the same modification that ASUS had.

I downloaded the BIOS for the Big Bang XPower II, another X79 desktop mainboard. Comparing v27 to v28, the same code snippet was deleted:

    cmp     al, 4                       ; al presumably holds the SPD density code (4 = 4Gb, 5 = 8Gb)
    jbe     short loc_FFF3BEEA          ; 4Gb or smaller: skip the whole check
loc_FFF3BEBB:
    test    byte ptr [ebp+74h+var_24], 80h ; test what looks like an "8Gb support fused off" flag
    jz      short loc_FFF3BEEA          ; flag clear: 8Gb devices are allowed
loc_FFF3BEC1:
    push    offset aErrorSupportFo_0    ; "Error! Support for 8Gb devices has been fused off!\n"
    push    ebx
    push    ebx
    push    ebx
    push    [ebp+74h+var_D]
    push    [ebp+74h+var_9]
    push    [ebp+74h+var_5]
    push    3
    push    edi
    call    sub_FFF51E64                ; apparently the MRC's error/log routine
    push    ebx
    push    [ebp+74h+var_D]
    push    [ebp+74h+var_9]
    push    [ebp+74h+var_5]
    push    0
    push    5
    jmp     short loc_FFF3BE70          ; back to an earlier common path (not shown)
loc_FFF3BEEA:

What's more, MSI didn't even remove the artificial/intentional "Support for 8Gb devices has been fused off" warning string!

Sigh. Unfortunately, I haven't found anything similar in my own laptop's BIOS so far…

I now think it is indeed an unbreakable hardware limit.


I found that my BIOS does implement an artificial limitation when parsing SPD byte #5 (not #4); however, removing this artificial limit achieves nothing.

When reading the column address count from SPD byte #5, the MRC also checks the CPUID. If the SPD says that the DIMM uses 11 or more column address lines, the MRC marks the DIMM as invalid unless the CPUID matches 0x40650 (Haswell-ULT).
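Reconstructed in C it looks roughly like this (the function and constant names are mine; only the logic reflects what I saw in the disassembly, and the byte-5 decoding is from the JEDEC DDR3 SPD spec):

    #include <stdint.h>
    #include <stdbool.h>

    #define SPD_ADDRESSING     5         /* DDR3 SPD byte 5 */
    #define CPUID_HASWELL_ULT  0x40650   /* family 6, model 0x45, stepping 0 */

    static bool dimm_addressing_allowed(const uint8_t *spd, uint32_t cpuid_eax)
    {
        /* bits [2:0] of SPD byte 5: column address bits = value + 9 */
        uint8_t col_bits = (spd[SPD_ADDRESSING] & 0x07) + 9;

        /* 11 or more column lines: only allowed when the CPUID says Haswell-ULT */
        if (col_bits >= 11 && cpuid_eax != CPUID_HASWELL_ULT)
            return false;

        return true;
    }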

I modified the jump instructions to remove that limitation; however, this didn't work out as expected. memtest86+ never succeeds, or even worse, the system simply crashes and reboots, and sometimes it even falls into black-screen boot loops.

If I limit the test range to something like 0-128M or 256M-512M, memtest86+ will pass; this seemed to be a sign that the 16GB memory space was just broken into many smaller pieces.

However, I never confirmed whether those fragmented, "seemingly available" memory ranges overlap with each other. It's entirely possible that, if the column address exceeds 2^10 = 1024, it simply overflows (because the hardware is unable/unwilling to drive more than 1024 columns) and "wraps around" back to column zero.
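To illustrate what such wrap-around aliasing would mean (purely a thought experiment, not something I read out of the hardware):

    #include <stdio.h>
    #include <stdint.h>

    /* If the memory controller only drives 10 column address lines, any
     * column number is effectively taken modulo 1024, so two "different"
     * columns can land on the very same DRAM cells. */
    #define DRIVEN_COL_BITS 10

    static uint32_t effective_column(uint32_t requested_col)
    {
        return requested_col & ((1u << DRIVEN_COL_BITS) - 1);
    }

    int main(void)
    {
        printf("column 1024 -> %u\n", effective_column(1024)); /* aliases to 0 */
        printf("column 1025 -> %u\n", effective_column(1025)); /* aliases to 1 */
        return 0;
    }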

At least my later experiments show that even if the 16GB memory space is theoretically addressable, it's still not practically usable: neither Linux nor Windows will boot, even with the "bad" memory address ranges blacklisted.

The best I was able to achieve was modifying the BIOS code to "fake" the SPD data so that a 16GB module looks like an 8GB one, letting the CPU's IMC drive the 16GB module at only half (8GB) of its capacity. In this "half-capacity" scenario everything is fine: memtest86+ passes, and Windows/Linux boot and run without problems. But it's obviously pointless, because the "8GB per module" limit still isn't broken.
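For the record, this "fake SPD" hack essentially exploits the standard DDR3 SPD capacity formula: halving the reported per-chip density halves the computed module capacity. A small sketch (the formula is from the JEDEC spec; the patching itself was done in the MRC's SPD-reading path, not on the module):

    #include <stdint.h>

    /* DDR3 module capacity per JEDEC SPD:
     * capacity = chip_density / 8 * bus_width / device_width * ranks */
    static uint32_t ddr3_module_capacity_mb(uint8_t density_code,
                                            uint8_t device_width_bits,
                                            uint8_t bus_width_bits,
                                            uint8_t ranks)
    {
        uint32_t density_mbit = 256u << density_code;  /* 0 = 256Mb ... 5 = 8Gb */
        return density_mbit / 8 * bus_width_bits / device_width_bits * ranks;
    }

    /* Two ranks of x8 8Gb chips on a 64-bit bus: code 5 -> 16384 MB.
     * Patching the density code down to 4 (4Gb) makes the very same
     * module report 8192 MB, which is what my half-capacity hack does. */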


Finally, I tracked this issue down to the MAD (memory address decoder) channel configuration registers. Coreboot's open-source native RAM init code for Haswell (and Ivy Bridge), as well as Intel's datasheet vol. 2, clearly show that these registers simply take the capacity per DIMM. They do not seem to allow specifying detailed configuration parameters such as row/column address line counts.

"The memory address decoder (MAD) does not work correctly with any unsupported capacity": this explains everything. (What's more, it's not only values above the maximum supported capacity of 8192MB that fail: not just 16384MB, but also 6144MB doesn't work, even though 6144MB does not exceed the 8192MB limit.)
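To show what I mean, here is a rough sketch of what goes into such a MAD_DIMM-style channel register, loosely modeled on coreboot's Sandy/Ivy Bridge native raminit. The exact bit positions are illustrative only and should be checked against Intel's datasheet vol. 2; the point is what the decoder can be told, not where the bits sit:

    #include <stdint.h>

    /* Sketch only: capacity (in 256MB units), rank count and device width
     * per DIMM are all the address decoder gets to know. */
    static uint32_t encode_mad_dimm(uint32_t dimm_a_mb, uint32_t dimm_b_mb,
                                    int a_dual_rank, int b_dual_rank,
                                    int a_x16, int b_x16)
    {
        uint32_t reg = 0;

        reg |= (dimm_a_mb / 256) & 0xFF;          /* DIMM A size, 256MB units */
        reg |= ((dimm_b_mb / 256) & 0xFF) << 8;   /* DIMM B size, 256MB units */
        reg |= (a_dual_rank ? 1u : 0u) << 17;     /* DIMM A rank count        */
        reg |= (b_dual_rank ? 1u : 0u) << 18;     /* DIMM B rank count        */
        reg |= (a_x16 ? 1u : 0u) << 19;           /* DIMM A device width      */
        reg |= (b_x16 ? 1u : 0u) << 20;           /* DIMM B device width      */

        /* There is no field at all for row/column address bit counts, so the
         * decoder apparently derives its mapping from the capacity alone; any
         * capacity it wasn't designed for (16384MB, or even 6144MB) simply
         * does not decode correctly. */
        return reg;
    }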

It's still somewhat interesting/weird why Intel's MRC (probably also modified by MSI?) codes an artificial limit so that the 11th column address line is usable only if the CPUID matches Haswell-ULT.

However, there are forum posts like this one (in Chinese) reporting that a ThinkPad X250 with an i3-4030U still won't boot with an updated BIOS and a 16GB module, even though Lenovo claims 16GB support in the BIOS update changelog. (That "supports 16GB" claim probably applies only to the Broadwell X250, I guess.)

@segfault_bilibili
To make it easier to understand the topic of this thread, I have customized its title.
If you have a better idea, feel free to change the title again by editing the first post.
Good luck!


Sorry, my English is not good, and I probably also failed to describe the issue with clear logic.

I've just edited both the title and the post contents in the hope of making things somewhat clearer.

In short, I'm trying to install a 16GB memory module into a Haswell laptop, which is not officially supported according to Intel's datasheet.