Corsair: An In-memory Computing Chiplet Architecture for Inference-time Compute Acceleration

By Satyam Srivastava; Akhil Arunkumar; Nithesh Kurella; Amrit Panda; Gaurav Jain; Purushotham Kamath
d-Matrix Corporation

Abstract:

Advances in Generative AI (GenAI) have reinvigorated research into novel computing architectures. The Transformer, the cornerstone of GenAI underlying Large Language Models (LLMs) and Reasoning Models (RMs), is characterized by low arithmetic intensity during most of the inference time, making it memory-bandwidth bound. Numerous solutions to this bandwidth bottleneck have been proposed. Corsair is an architecture that targets this need by combining a chiplet-based design, a digital in-memory-computing matrix engine, efficient die-to-die interconnects, block floating point numerics, and large, high-bandwidth on-chip memories. We describe the Corsair chiplet, the scaling approaches used to compose larger systems, and the software stack. We formulate the inference-time requirements of LLMs and RMs: computation, memory bandwidth, memory capacity, and interconnect efficiency for scaling. We then show how the Corsair design is well matched to these workloads. We present benchmark results from Corsair silicon that correlate strongly with the design, and preview an estimate of the workload-level improvements expected with Corsair.
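The low-arithmetic-intensity claim can be illustrated with a back-of-the-envelope calculation (a hypothetical sketch, not taken from the paper; the layer size and token count are illustrative assumptions): during autoregressive decode, each weight matrix is read in full to process a single token, so the FLOPs-per-byte ratio collapses compared with prefill, where the same weight read is amortized over many tokens.

```python
# Hypothetical illustration: arithmetic intensity (FLOPs per byte moved)
# of a Transformer projection layer during decode vs. prefill.
# Low values mean the operation is memory-bandwidth bound.

def arithmetic_intensity(flops, bytes_moved):
    return flops / bytes_moved

# Assumed d x d projection layer with 2-byte weights (illustrative numbers).
d = 4096
weight_bytes = d * d * 2

# Decode: one token -> matrix-vector product.
# 2*d*d FLOPs (multiply + add), and the whole weight matrix must be read.
decode_ai = arithmetic_intensity(2 * d * d, weight_bytes)

# Prefill: a 512-token batch -> matrix-matrix product amortizing the same read.
tokens = 512
prefill_ai = arithmetic_intensity(2 * tokens * d * d, weight_bytes)

print(decode_ai)   # 1.0 FLOP/byte: bandwidth bound
print(prefill_ai)  # 512.0 FLOPs/byte: compute bound
```

The two or three orders of magnitude between these regimes is why architectures like Corsair emphasize memory bandwidth rather than raw compute for inference-time acceleration.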
