Alphawave Semi Selected for AI Innovation Research Grant from UK Government’s Advanced Research + Invention Agency

UK's ARIA creates £50 million (US$65 million) fund to improve and scale AI performance cost-effectively

LONDON, United Kingdom, and TORONTO, Canada – October 31, 2024 – Alphawave Semi (LSE: AWE), a global leader in high-speed connectivity and compute silicon for the world’s technology infrastructure, has been awarded a research grant from the UK's Advanced Research + Invention Agency (ARIA) to remove networking bottlenecks that are limiting AI's growth.

Data from Meta[1] indicates that AI processors can be idle roughly two-fifths of the time waiting for data from the network, wasting costly resources.

To address this and similar challenges that hinder cost-effective AI scaling, ARIA is driving a shift toward more efficient processing systems capable of handling complex data. It has awarded £50 million (US$65 million) to 12 organizations, including Alphawave Semi, to focus on advancing networking and interconnect technologies.

"It’s clear that new connectivity solutions are essential to support sustainable AI scaling. Innovation is needed throughout the technology stack, from 224Gbps to 448Gbps component technologies to networking, compute architectures, and algorithm development," said Tony Chan Carusone, Chief Technology Officer, Alphawave Semi. "We'll be developing cutting-edge technologies that enable significantly higher interconnectivity within clusters of AI accelerators. We’re proud to leverage our leadership and established partnerships in chiplet-based connectivity for this project. Our solutions will impact the future of AI over the long term."

Alphawave Semi recently announced that it taped out the first multi-protocol I/O connectivity chiplet for high-performance compute and developed the first 3nm IP for UCIe, the open chiplet communication protocol, in partnership with TSMC. UCIe- and chiplet-related partnerships are also in place with other companies, including Samsung and Arm.

Other projects in ARIA’s bid to alleviate dependence on leading-edge chip manufacturing include software simulators to map whole-system performance, cost, and power requirements, as well as advances in computational primitives.

For more information on Alphawave Semi visit https://awavesemi.com.

To learn more about ARIA's Scaling Compute programme, please visit: www.aria.org.uk/scaling-compute.

To learn more about the other ARIA R&D Creators working in this space, read ARIA's Substack post: https://substack.com/home/post/p-150703459


Notes to editors

[1] – In a 2022 OCP keynote entitled “Infrastructure for Large Scale AI: Empowering Open”, Meta’s Alexis Bjorlin stated that for its M4 benchmark model, 37.67% of the time was spent transferring data between processors; this increased to 57.24% spent in networking for its M2 benchmark model. Bjorlin cited the level of model-parallel computation done at Meta and stated, “the balance is skewed towards network I/O… [and there is a] need for very high injection bandwidth and very high bisectional bandwidth.”


About Alphawave Semi

Alphawave Semi is a global leader in high-speed connectivity and compute silicon for the world's technology infrastructure. Faced with the exponential growth of data, Alphawave Semi's technology services a critical need: enabling data to travel faster, more reliably, and with higher performance at lower power. We are a vertically integrated semiconductor company, and our IP, custom silicon, and connectivity products are deployed by global tier-one customers in data centers, compute, networking, AI, 5G, autonomous vehicles, and storage. Founded in 2017 by an expert technical team with a proven track record in licensing semiconductor IP, our mission is to accelerate the critical data infrastructure at the heart of our digital world. To find out more about Alphawave Semi, visit: awavesemi.com.