To GPU or not GPU
Over the past decade, GPUs have become fundamental to the progress of artificial intelligence (AI) and machine learning (ML). They are widely used to handle the large-scale computations required for AI model training and inference, and they play an increasingly important role in data centers. The key feature of the GPU is its ability to perform parallel processing efficiently, making it ideal for training machine learning models, which require vast numbers of computations to be carried out simultaneously.
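To make that parallelism concrete, here is a minimal sketch (not from the article) comparing the same dense matrix multiply on a CPU with NumPy and on a GPU with the optional CuPy package; it assumes a CUDA-capable GPU is available. The point is simply that the multiply decomposes into many independent multiply-adds that a GPU can execute concurrently across thousands of threads.

```python
# Illustrative sketch: identical dense math on CPU (NumPy) and GPU (CuPy).
# Assumes CuPy is installed and a CUDA-capable GPU is present.
import numpy as np
import cupy as cp

N = 4096
a_cpu = np.random.rand(N, N).astype(np.float32)
b_cpu = np.random.rand(N, N).astype(np.float32)

# CPU: the whole multiply runs on a handful of cores.
c_cpu = a_cpu @ b_cpu

# GPU: the same expression, but CuPy dispatches the work to the GPU,
# where many output elements are computed in parallel.
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu
cp.cuda.Stream.null.synchronize()  # wait for the GPU work to finish

print(np.allclose(c_cpu, cp.asnumpy(c_gpu), atol=1e-3))
```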
However, as demand for AI computing increases, the overall efficiency of this technology is being called into question. Industry data suggests that roughly 40% of execution time across a variety of workloads is spent on networking between compute chips, bottlenecked by limited communication capacity (fig. 1).
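One way to see why that 40% figure matters is a simple Amdahl's-law-style estimate. The sketch below is purely illustrative (the numbers are not from the article): it assumes the communication portion of runtime does not shrink as more compute is added, so it caps the end-to-end speedup.

```python
# Back-of-envelope sketch of how inter-chip communication caps scaling.
# Illustrative assumption: 40% of wall-clock time is spent waiting on the
# network and stays fixed, while only the compute portion speeds up.
def effective_speedup(compute_scale: float, comm_fraction: float = 0.40) -> float:
    """End-to-end speedup when only the compute fraction benefits."""
    compute_fraction = 1.0 - comm_fraction
    return 1.0 / (comm_fraction + compute_fraction / compute_scale)

for chips in (2, 4, 8, 16):
    print(f"{chips}x compute -> {effective_speedup(chips):.2f}x end-to-end")
# With 40% of time in communication, even 16x more compute yields only ~2.3x.
```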
Demand for AI applications continues to rise, and this connectivity issue arrives just as the cost of using general-purpose GPUs for specific workloads, together with their high power consumption, is motivating a move away from GPU-centric compute architectures towards custom silicon and chiplet-based designs. These modular, flexible SoCs enable scalable hardware solutions that can be optimized not only to reduce power consumption and cost, but also to improve communication bandwidth.
To read the full article, click here