Why UCIe is Key to Connectivity for Next-Gen AI Chiplets
By Letizia Giuliano, VP of IP Products, Alphawave Semi
EETimes (February 6, 2025)
Deploying AI at scale presents enormous challenges, with workloads demanding massive compute power and high-speed communication bandwidth.
Large AI clusters require substantial networking infrastructure to handle the data flow between processors, memory, and storage; without it, even the most advanced models can be bottlenecked. Data from Meta suggests that approximately 40% of the time data spends in the data center is wasted sitting in the network.
In short, connectivity is choking the network, and AI requires dedicated hardware with the maximum possible communication bandwidth.
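To put that figure in perspective, a simple Amdahl's-law-style estimate shows why communication time caps the return on adding faster compute. This is a minimal sketch: the 40% networking share is the only input taken from the article, and the compute speedup values are purely illustrative.

# Amdahl's-law-style estimate of how networking time limits AI training speedup.
# Assumption: 40% of wall-clock time is spent waiting on the network (the Meta
# figure cited above); the compute speedup values are illustrative only.

def effective_speedup(comm_fraction: float, compute_speedup: float) -> float:
    """Overall job speedup when only the compute portion is accelerated."""
    compute_fraction = 1.0 - comm_fraction
    return 1.0 / (comm_fraction + compute_fraction / compute_speedup)

if __name__ == "__main__":
    comm = 0.40  # fraction of job time spent in networking
    for s in (2, 4, 8, 100):
        print(f"{s:>4}x faster compute -> {effective_speedup(comm, s):.2f}x overall")
    # Even infinitely fast compute is capped at 1 / 0.40 = 2.5x overall,
    # which is why bandwidth, not just compute, has to scale.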
Large AI training workloads create high-bandwidth traffic on the back-end network. This traffic generally flows in regular patterns and does not require the packet-by-packet handling needed in the front-end network; when the system is running properly, these links operate at very high utilization.
Low latency is critical because accelerators need fast access to remote resources, and a flat network hierarchy enables this. To keep expensive compute from sitting idle, switching must also be non-blocking; even a single link with frequent packet losses can bottleneck the performance of an entire AI network. Robustness and reliability are therefore equally critical and must be built into the design of the back-end ML network.
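The point about a single lossy link can be illustrated with a toy model of a synchronous collective operation, where every accelerator waits for the slowest link to finish each step. This is a hedged sketch, not a description of any real fabric: the link count, loss rate, packet count, and retransmission penalty below are all assumed for illustration.

# Toy model: in a synchronous collective, step time is set by the slowest link.
# A single link with frequent packet loss (and hence retransmissions) drags the
# whole cluster down. All numbers here are illustrative assumptions.
import random

LINKS = 64                    # hypothetical number of links in the collective
CLEAN_STEP_MS = 2.0           # time for a loss-free link to move its shard
LOSS_RATE = 0.01              # packet-loss rate on the one misbehaving link
RETRANSMIT_PENALTY_MS = 8.0   # extra delay per lost packet (timeout + resend)
PACKETS_PER_STEP = 500        # packets each link sends per step

def link_time(loss_rate: float) -> float:
    lost = sum(random.random() < loss_rate for _ in range(PACKETS_PER_STEP))
    return CLEAN_STEP_MS + lost * RETRANSMIT_PENALTY_MS

random.seed(0)
times = [link_time(0.0) for _ in range(LINKS - 1)] + [link_time(LOSS_RATE)]
print(f"healthy links finish in {max(times[:-1]):.1f} ms")
print(f"lossy link finishes in  {times[-1]:.1f} ms")
print(f"step time (max of all)  {max(times):.1f} ms")
# Every accelerator waits for the maximum, so one lossy link paces the whole
# step -- the motivation for non-blocking switching and reliable back-end fabrics.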