Next-Gen AI Architecture Through Co-Packaged Optics

The battle for AI supremacy will be won in the infrastructure layer. As reasoning models demand 100x more inference compute and training clusters scale to millions of XPUs, co-packaged optics (CPO) represents the only viable path forward: bringing proven silicon photonics directly into the package to eliminate the bandwidth and latency bottlenecks that constrain next-generation AI systems.

But technology readiness is only one piece of the puzzle. In this webinar, industry leaders from Alchip, Astera Labs, and Ayar Labs move beyond proof-of-concept to address the manufacturing, supply chain, and integration realities of commercial CPO deployment.

Join this expert panel as they decode the three-stage silicon photonics roadmap (scale-out → scale-up → extended memory), explore how the scale-out ecosystem is proving out CPO for the higher-stakes scale-up market, and provide specific guidance on deployment timelines from 2026 through 2028.

Discover why technology maturation is ahead of packaging flows and supply chain readiness, how hyperscalers will balance customization with multi-sourcing requirements, and why software-defined telemetry and rack-level integration strategies are as critical as the optical chiplet itself. Learn what barriers must fall before CPO achieves the volumes AI demands and who will cross the finish line first.

Key Takeaways

  1. Architecting XPU clusters that behave like one giant chip across racks
  2. The three pillars of CPO readiness
  3. Meeting the engineering requirements for 100Tb/s+ XPU-to-XPU connectivity
  4. Solving the power and latency challenges of 100MW+ AI factories
  5. The economic tipping points driving CPO adoption from 2026 to 2028
  6. Manufacturing at AI scale
  7. The standardization vs. customization paradox