A Heterogeneous Chiplet Architecture for Accelerating End-to-End Transformer Models
By Harsh Sharma, Pratyush Dhingra, Janardhan Rao Doppa, and Partha Pratim Pande (Washington State University) and Umit Ogras (University of Wisconsin-Madison)
Transformers have revolutionized deep learning and generative modeling, enabling unprecedented advances in natural language processing. However, transformer model sizes continue to grow, driven by the demand for stronger capabilities across deep-learning tasks. This ever-increasing model size creates new challenges in memory and compute requirements. Conventional computing platforms, including GPUs, deliver suboptimal performance due to the memory demands imposed by models with millions or billions of parameters. Emerging chiplet-based platforms, enabled by a Network-on-Interposer (NoI), offer a new avenue for compute- and data-intensive machine learning (ML) applications. However, designing suitable hardware accelerators for transformer inference workloads is challenging because the transformer architecture comprises a wide variety of complex computing kernels. In this paper, we leverage chiplet-based heterogeneous integration (HI) to design a high-performance and energy-efficient multi-chiplet platform for accelerating transformer workloads. We demonstrate that the proposed NoI architecture caters to the data access patterns inherent in transformer models. The optimized placement of the chiplets, together with the associated NoI links and routers, enables superior performance compared to state-of-the-art hardware accelerators. The proposed NoI-based architecture scales across transformer models of varying size and improves latency and energy efficiency by up to 22.8x and 5.36x, respectively.
To read the full article, click here
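The abstract points to the wide variety of computing kernels in a transformer layer as the main obstacle to a one-size-fits-all accelerator. The sketch below is illustrative only and is not taken from the paper: it coarsely tallies FLOPs, data movement, and arithmetic intensity for the main kernels of a single decoder layer, with all model dimensions chosen as placeholder assumptions. The contrast between GEMM-heavy projections and memory-bound operations such as softmax is the kind of heterogeneity that motivates mixing different chiplet types on an interposer.

```python
# Illustrative only: rough per-layer kernel inventory for a decoder-style
# transformer, showing why a single accelerator style fits every kernel poorly.
# Dimensions (d_model, n_heads, seq_len, d_ff) are placeholders, not values
# from the paper.

def kernel_profile(d_model=1024, n_heads=16, seq_len=512, d_ff=4096, bytes_per_elem=2):
    d_head = d_model // n_heads
    kernels = {
        # name: (FLOPs, bytes moved) -- counted very coarsely, per layer
        "QKV projections":   (2 * 3 * seq_len * d_model * d_model,
                              bytes_per_elem * (seq_len * d_model + 3 * d_model * d_model)),
        "attention scores":  (2 * n_heads * seq_len * seq_len * d_head,
                              bytes_per_elem * 2 * seq_len * d_model),
        "softmax":           (5 * n_heads * seq_len * seq_len,
                              bytes_per_elem * 2 * n_heads * seq_len * seq_len),
        "attention x V":     (2 * n_heads * seq_len * seq_len * d_head,
                              bytes_per_elem * (n_heads * seq_len * seq_len + seq_len * d_model)),
        "output projection": (2 * seq_len * d_model * d_model,
                              bytes_per_elem * (seq_len * d_model + d_model * d_model)),
        "FFN (2 GEMMs)":     (2 * 2 * seq_len * d_model * d_ff,
                              bytes_per_elem * (2 * seq_len * d_ff + 2 * d_model * d_ff)),
    }
    for name, (flops, moved_bytes) in kernels.items():
        print(f"{name:18s}  FLOPs={flops:.2e}  bytes={moved_bytes:.2e}  "
              f"arithmetic intensity={flops / moved_bytes:6.1f}")

kernel_profile()
```

Under these assumptions, the projection and FFN GEMMs show arithmetic intensities in the hundreds while softmax sits below one, so a homogeneous array tuned for either extreme leaves the other underutilized; a heterogeneous chiplet platform can instead place compute-dense and memory-oriented chiplets side by side and route data between them over the NoI.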
Related Chiplets
- DPIQ Tx PICs
- IMDD Tx PICs
- Near-Packaged Optics (NPO) Chiplet Solution
- High Performance Chiplet
- Interconnect Chiplet
Related Technical Papers
- Hemlet: A Heterogeneous Compute-in-Memory Chiplet Architecture for Vision Transformers with Group-Level Parallelism
- AuthenTree: A Scalable MPC-Based Distributed Trust Architecture for Chiplet-based Heterogeneous Systems
- AIG-CIM: A Scalable Chiplet Module with Tri-Gear Heterogeneous Compute-in-Memory for Diffusion Acceleration
- Resister: A Resilient Interposer Architecture for Chiplet to Mitigate Timing Side-Channel Attacks
Latest Technical Papers
- Advances in waveguide to waveguide couplers for 3D integrated photonic packaging
- Lifecycle Cost-Effectiveness Modeling for Redundancy-Enhanced Multi-Chiplet Architectures
- DISTIL: A Distributed Spiking Neural Network Accelerator on 2.5D Chiplet Systems
- Multi-Partner Project: COIN-3D -- Collaborative Innovation in 3D VLSI Reliability
- EOTPR Fine Pitch Probing for Die-to-Die Interconnect Failure Analysis