Samsung unveils higher capacity HBM3E memory for faster AI training and inference

Peter, 27 February 2024

Samsung just announced HBM3E 12H DRAM with advanced TC NCF technology – fans of acronyms must be excited to read this, but for everyone else, here’s what that means. HBM stands for “high bandwidth memory” and it does what it says on the tin.

In October Samsung unveiled HBM3E Shinebolt, an enhanced version of the third generation of HBM that could achieve 9.8Gbps per pin (and 1.2 terabytes per second for the whole package).
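As a quick sanity check (ours, not Samsung's): each HBM stack connects over the standard 1,024-bit interface, so multiplying the per-pin speed by the pin count and converting bits to bytes gets you to the headline figure.

# Rough sanity check of the HBM3E Shinebolt bandwidth figure (Python).
# Assumes the standard 1,024-bit (1,024-pin) HBM interface per stack.
pin_speed_gbps = 9.8          # gigabits per second, per pin
pins_per_stack = 1024         # HBM3/HBM3E interface width
bandwidth_gb_per_s = pin_speed_gbps * pins_per_stack / 8
print(f"{bandwidth_gb_per_s:.0f} GB/s")   # -> 1254 GB/s, i.e. roughly 1.2 TB/s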

Next, 12H. This is simply the number of chips that have been stacked vertically in each module, 12 in this case. Stacking more chips is a way to fit more memory into a module, and Samsung has reached 36GB with its 12H design, 50% more than an 8H design. Bandwidth remains at 1.2 terabytes per second, however.
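In other words, stack capacity is just dies per stack times die density. Samsung's figures imply 24Gb (3GB) DRAM dies in both configurations, which is the only assumption in the little calculation below.

# Stack capacity = number of stacked dies x capacity per die.
gb_per_die = 3                       # a 24-gigabit DRAM die holds 3 GB
capacity_8h = 8 * gb_per_die         # 24 GB
capacity_12h = 12 * gb_per_die       # 36 GB
print(capacity_8h, capacity_12h)     # -> 24 36
print((capacity_12h - capacity_8h) / capacity_8h)   # -> 0.5, i.e. 50% more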

Finally, TC NCF. This stands for Thermal Compression Non-Conductive Film, i.e. the stuff that is layered in between the stacked chips. Samsung has been working on making it thinner, down to 7µm now, so the 12H stack is about the same height as an 8H stack, allowing the same HBM packaging to be used.

An additional benefit of TC NCF is improved thermal properties, which helps with cooling. Even better, the method used in this new HBM3E 12H DRAM also improves yields.


What will this memory be used for? Like you need to ask – AI is all the hype these days. To be fair, it is an application that requires a lot of RAM. Last year Nvidia added Samsung to its list of suppliers for high-bandwidth memory, and Nvidia builds some insane designs.

The Nvidia H200 Tensor Core GPU has 141GB of HBM3E that runs at a total of 4.8 terabytes per second. This is well beyond what you see on a consumer GPU with GDDR. For example, the RTX 4090 has 24GB of GDDR6X that runs at just 1 terabyte per second.
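The RTX 4090 figure works out the same way as the HBM one, just with a far narrower bus – 21Gbps GDDR6X on a 384-bit interface – which is where the roughly 1 terabyte per second comes from.

# RTX 4090 memory bandwidth from its published specs.
pin_speed_gbps = 21        # GDDR6X effective data rate per pin
bus_width_bits = 384       # memory bus width
bandwidth_gb_per_s = pin_speed_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_per_s:.0f} GB/s")   # -> 1008 GB/s, just over 1 TB/s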

Anyway, based on reports, the H200 uses six 24GB HBM3E 8H modules from Micron (144GB in total, of which 141GB is usable). The same capacity could be achieved with only four 12H modules; alternatively, six 12H modules would bring the total to 216GB.
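Spelling out that module math (the 12H layouts are hypothetical configurations, not announced products):

# Reported H200 configuration vs hypothetical layouts using 12H stacks.
hbm3e_8h_gb = 24    # per-module capacity of the reported 8H stacks
hbm3e_12h_gb = 36   # per-module capacity of Samsung's new 12H stacks
print(6 * hbm3e_8h_gb)    # -> 144 GB (141 GB usable on the H200)
print(4 * hbm3e_12h_gb)   # -> 144 GB from just four 12H modules
print(6 * hbm3e_12h_gb)   # -> 216 GB from six 12H modules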

According to Samsung’s estimates, the extra capacity of its new 12H design will speed up AI training by 34% and will allow inference services to handle “more than 11.5 times” the number of users.

The AI boom will keep accelerators like the H200 in high demand, so it’s a lucrative business to be the memory supplier – you can see why companies like Micron, Samsung and SK Hynix want a piece of the pie.

Source

