Chiplet Cloud: Building AI Supercomputers for Serving Large Generative Language Models
By Huwan Peng, Scott Davidson, Richard Shi, Shuaiwen Leon Song, Michael Taylor (University of Washington)
Large language models (LLMs) such as ChatGPT have demonstrated unprecedented capabilities across many AI tasks. However, hardware inefficiency has become a significant factor limiting the democratization of LLMs. We propose Chiplet Cloud, an ASIC supercomputer architecture that optimizes total cost of ownership (TCO) per token for serving generative LLMs. Chiplet Cloud fits all model parameters inside on-chip SRAM to eliminate memory-bandwidth limitations, moderates die size to improve system cost, and leverages software mappings to overcome data communication overhead. We propose a comprehensive design methodology that accurately explores the major design trade-offs in the joint hardware-software space and generates a detailed performance-cost analysis of all valid design points. We evaluate Chiplet Cloud on four popular LLMs. Compared to GPUs and TPUs, our architecture achieves up to 94x and 15x improvements in TCO/token, respectively, significantly reducing the cost of realistically serving modern LLMs.
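To illustrate the kind of joint hardware-software exploration the abstract describes, the sketch below enumerates hypothetical chiplet configurations, keeps only those whose aggregate on-chip SRAM can hold all model parameters, and ranks the survivors by an estimated TCO per token. This is a minimal sketch under stated assumptions, not the paper's actual methodology: every constant, cost model, and throughput figure in it is an illustrative placeholder.

```python
# Minimal sketch (not the paper's methodology) of a TCO/token design-space sweep
# in the spirit of Chiplet Cloud: enumerate chiplet configurations, keep only those
# whose aggregate on-chip SRAM holds all model parameters, and rank the survivors
# by estimated TCO per token. All constants below are hypothetical placeholders.

from dataclasses import dataclass
from itertools import product
from typing import Optional

@dataclass
class DesignPoint:
    sram_per_chiplet_mb: int           # on-chip SRAM per chiplet
    chiplets_per_system: int
    tokens_per_sec_per_chiplet: float  # assumed sustained decode throughput
    capex_per_chiplet_usd: float       # die + packaging cost (smaller die -> better yield)
    watts_per_chiplet: float

MODEL_PARAMS_GB = 350                     # hypothetical model footprint in bytes of weights
LIFETIME_SECONDS = 3 * 365 * 24 * 3600    # 3-year depreciation window (assumption)
ELECTRICITY_USD_PER_KWH = 0.10            # assumption

def tco_per_token(dp: DesignPoint) -> Optional[float]:
    """Return estimated TCO/token in USD, or None if the model does not fit in SRAM."""
    total_sram_gb = dp.sram_per_chiplet_mb * dp.chiplets_per_system / 1024
    if total_sram_gb < MODEL_PARAMS_GB:
        return None  # parameters must fit entirely on-chip to avoid off-chip bandwidth limits
    capex = dp.capex_per_chiplet_usd * dp.chiplets_per_system
    energy_kwh = dp.watts_per_chiplet * dp.chiplets_per_system * LIFETIME_SECONDS / 3.6e6
    opex = energy_kwh * ELECTRICITY_USD_PER_KWH
    tokens = dp.tokens_per_sec_per_chiplet * dp.chiplets_per_system * LIFETIME_SECONDS
    return (capex + opex) / tokens

# Sweep a few hypothetical chiplet sizes and system populations.
candidates = [
    DesignPoint(sram, count,
                tokens_per_sec_per_chiplet=50.0,     # toy throughput model
                capex_per_chiplet_usd=0.4 * sram,    # toy cost model: cost scales with SRAM
                watts_per_chiplet=0.05 * sram)       # toy power model
    for sram, count in product([256, 512, 1024], [512, 1024, 2048, 4096])
]
valid = [(dp, c) for dp in candidates if (c := tco_per_token(dp)) is not None]
best = min(valid, key=lambda x: x[1])
print(f"best design point: {best[0]} -> ${best[1]:.2e}/token")
```

In the paper's framing, moderating die size trades per-chiplet capacity against yield and manufacturing cost; the toy cost model above only gestures at that trade-off and omits the software mapping and communication analysis the full methodology covers.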
Related Chiplet
- High Performance Droplet
- Interconnect Chiplet
- 12nm EURYTION RFK1 - UCIe SP based Ka-Ku Band Chiplet Transceiver
- Bridglets
- Automotive AI Accelerator
Related Technical Papers
- Hecaton: Training and Finetuning Large Language Models with Scalable Chiplet Systems
- A Heterogeneous Chiplet Architecture for Accelerating End-to-End Transformer Models
- A3D-MoE: Acceleration of Large Language Models with Mixture of Experts via 3D Heterogeneous Integration