Speeding Down Memory Lane With Custom HBM

Integrating the functionality of the HBM base die into a logic die provides greater flexibility and additional control.

By Faisal Goriawalla, Synopsys
SemiEngineering | March 11, 2025 

With the goal of increasing system performance per watt, the semiconductor industry is always seeking innovative solutions that go beyond the usual approaches of increasing memory capacity and data rates. Over the last decade, the High Bandwidth Memory (HBM) protocol has proven to be a popular choice for data center and high-performance computing (HPC) applications. Even more benefit can be realized as the industry moves toward custom HBM (cHBM), providing system-on-chip (SoC) designers with the flexibility and control to achieve higher performance, lower power, or smaller area, depending on their application.

Why HBM is winning

HBM is increasingly used in data centers for AI/ML and other compute-intensive workloads in demanding applications. Support from all three major vendors means that end customers can have true multi-sourcing, although accelerated demand has put pressure on the supply chain. According to a recent Bloomberg Intelligence report, the HBM market is set to grow at an annual rate of 42% from US $4B (2023) to US $130B (2033), driven mainly by AI computing as workloads expand. HBM will occupy more than half of the total DRAM market by 2033.
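The growth figures above can be sanity-checked with simple compounding arithmetic: a 42% annual rate applied over the ten years from 2023 to 2033 takes US $4B to roughly US $130B. The sketch below assumes straightforward annual compounding (the endpoints and rate are from the cited Bloomberg Intelligence report; the compounding model itself is an assumption for illustration):

```python
# Sanity-check the reported HBM market projection.
# Figures (US $4B in 2023, 42% CAGR) are from the cited Bloomberg
# Intelligence report; simple annual compounding is assumed.
start_usd_b = 4.0   # 2023 market size, US $B
cagr = 0.42         # reported compound annual growth rate
years = 10          # 2023 -> 2033

projected = start_usd_b * (1 + cagr) ** years
print(f"Projected 2033 market: ~US ${projected:.0f}B")
```

The result lands at roughly US $133B, consistent with the report's headline figure of US $130B.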
