PICNIC: Silicon Photonic Interconnected Chiplets with Computational Network and In-memory Computing for LLM Inference Acceleration

By Yue Jiet Chong, Yimin Wang, Zhen Wu and Xuanyao Fong
National University of Singapore, Singapore

Abstract

This paper presents a 3D-stacked chiplet-based large language model (LLM) inference accelerator, consisting of non-volatile in-memory-computing processing elements (PEs) and an Inter-PE Computational Network (IPCN), interconnected via silicon photonics to effectively address communication bottlenecks. An LLM mapping scheme was developed to optimize hardware scheduling and workload mapping. Simulation results show that the accelerator achieves a 3.95× speedup and a 30× efficiency improvement over the Nvidia A100 before applying the chiplet clustering and power gating (CCPG) scheme. With CCPG, the system scales to larger models with further efficiency gains, attaining a 57× efficiency improvement over the Nvidia H100 at similar throughput.

Index Terms: LLM Inference, Hardware Accelerator, HW-SW Co-design
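
To make the clustering-and-gating idea concrete, below is a minimal Python sketch of a CCPG-style mapping: it greedily packs LLM layer workloads onto as few chiplets as possible, then power-gates the unused chiplets. The greedy first-fit heuristic and every name and parameter here (Chiplet, capacity_macs, active_power_w, gated_power_w, map_layers_to_chiplets) are illustrative assumptions for this sketch, not the scheme described in the paper.

from dataclasses import dataclass

@dataclass
class Chiplet:
    # Hypothetical per-chiplet parameters; not taken from the paper.
    capacity_macs: float   # MAC operations the chiplet can hold per pass
    active_power_w: float  # power drawn when the chiplet is active
    gated_power_w: float   # residual power when the chiplet is power-gated

def map_layers_to_chiplets(layer_macs, chiplets):
    """Greedy first-fit packing of LLM layers onto chiplet clusters.

    A minimal sketch of the general principle behind chiplet clustering
    and power gating: concentrate the workload on few chiplets so the
    rest can be gated. The paper's actual scheme is more sophisticated.
    """
    assignment = {i: [] for i in range(len(chiplets))}
    remaining = [c.capacity_macs for c in chiplets]
    for layer, macs in enumerate(layer_macs):
        for i, free in enumerate(remaining):
            if macs <= free:
                assignment[i].append(layer)
                remaining[i] -= macs
                break
        else:
            raise ValueError(f"layer {layer} does not fit on any chiplet")
    active = {i for i, layers in assignment.items() if layers}
    power = sum(
        c.active_power_w if i in active else c.gated_power_w
        for i, c in enumerate(chiplets)
    )
    return assignment, active, power

# Toy usage: 6 identical chiplets, 8 transformer layers of equal cost.
chiplets = [Chiplet(capacity_macs=4e9, active_power_w=5.0, gated_power_w=0.2)] * 6
assignment, active, power = map_layers_to_chiplets([1e9] * 8, chiplets)
print(f"active chiplets: {sorted(active)}, estimated power: {power:.1f} W")

In this toy run the eight layers fit on two of the six chiplets, so the other four are gated; the gap between active and gated power is where the efficiency improvement in such a scheme comes from.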
