Optimizing Attention on GPUs by Exploiting GPU Architectural NUMA Effects
By Mansi Choudhary¹, Karthik Sangaiah², Sonali Singh², Muhammad Osama², Lisa Wu Wills³, Ganesh Dasika²
¹ Department of ECE, Duke University, Durham, USA
² Advanced Micro Devices Inc., Santa Clara, USA
³ Department of Computer Science, Duke University, Durham, USA

Abstract
The rise of disaggregated AI GPUs has exposed a critical bottleneck in large-scale attention workloads: non-uniform memory access (NUMA). As multi-chiplet designs become the norm for scaling compute capabilities, memory latency and bandwidth vary sharply across compute regions, undermining the performance of traditional GPU kernel scheduling strategies that assume uniform memory access. We identify how these NUMA effects distort locality in multi-head attention (MHA) and present Swizzled Head-first Mapping, a spatially-aware scheduling strategy that aligns attention heads with GPU NUMA domains to exploit intra-chiplet cache reuse. On AMD's MI300X architecture, our method achieves up to 50% higher performance over state-of-the-art attention algorithms using conventional scheduling techniques and sustains consistently high L2 cache hit rates of 80-97%. These results demonstrate that NUMA-aware scheduling is now fundamental to achieving full efficiency on next-generation disaggregated GPUs, offering a path forward for scalable AI training and inference.