Sarcina Technology launches AI platform to enable cost-effective, customizable AI packaging solutions
Palo Alto, CA – 25 March 2025 – Sarcina Technology, a global semiconductor packaging specialist, has announced the launch of its AI platform, which enables advanced AI packaging solutions tailored to specific customer requirements. Leveraging ASE’s FOCoS-CL (Fan-Out Chip-on-Substrate, Chip-Last) assembly technology, the platform includes an interposer that supports chiplets using UCIe-A for die-to-die interconnects, allowing for the delivery of cost-effective, customizable, cutting-edge solutions.
Sarcina Technology is on a mission to push the boundaries of AI computing system development by providing a unique platform that enables efficient, scalable, configurable and cost-effective semiconductor packaging solutions for AI applications. As AI workloads continue to evolve, they demand increasingly sophisticated packaging solutions capable of supporting higher computational loads. Sarcina’s novel interposer packaging technology integrates leading memory solutions with high-efficiency interconnects. Whether the priority is cost, performance or power efficiency, Sarcina’s new AI platform can deliver.
According to Dr. Larry Zu, CEO of Sarcina Technology:
“Six years ago, after prototyping a 2.5D silicon TSV interposer package that integrated one ASIC and two HBMs, we predicted this technology would enable highly complex compute solutions. Today, this vision is becoming a reality, driven by RDL die-to-die interconnects like UCIe.”
Zu continues: “With FOCoS assembly technology, we are entering a new era of AI computing. Our AI platform offers greater efficiency and customization, with the lowest cost in the industry for generative AI chips. This ensures that our customers stay competitive in the rapidly evolving AI landscape.”
The Sarcina team has developed an interposer with a data interface of up to 64 bits per module, achieving data rates of up to 32 GT/s per lane. This delivers the highest UCIe-A performance in both bandwidth and data rate, as specified by the UCIe 2.0 standard. To further increase data transfer throughput, multiple modules can be arranged in parallel along the silicon die edge. Customers can also choose between LPDDR5X/6 packaged memory chips and HBM.
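For a rough sense of what these figures imply, the sketch below multiplies the module width by the per-lane rate to estimate raw per-module throughput. The 64-lane width and 32 GT/s per-lane rate come from the announcement; the unit conversion and the linear multi-module scaling are illustrative assumptions that ignore protocol and encoding overhead.

```python
# Back-of-the-envelope estimate of raw UCIe-A per-module throughput.
# 64 lanes per module and 32 GT/s per lane are quoted in the announcement;
# the conversion below ignores protocol/encoding overhead.

LANES_PER_MODULE = 64        # UCIe-A x64 module width
GT_PER_SECOND_PER_LANE = 32  # per-lane data rate in GT/s

raw_gbps = LANES_PER_MODULE * GT_PER_SECOND_PER_LANE   # gigabits per second
raw_gb_per_s = raw_gbps / 8                            # gigabytes per second
print(f"Raw per-module bandwidth: {raw_gbps} Gb/s (~{raw_gb_per_s:.0f} GB/s)")
# -> Raw per-module bandwidth: 2048 Gb/s (~256 GB/s)

# Placing several modules along the die edge scales throughput roughly linearly.
for modules in (1, 2, 4):
    print(f"{modules} module(s): ~{modules * raw_gb_per_s:.0f} GB/s raw")
```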
Sarcina has extensive expertise in designing high-power, high-performance semiconductor packages. This allows semiconductor startups to focus on developing efficient algorithms for generative AI and edge AI training without the need for an expensive post-silicon design and manufacturing team. Startups can simply develop their silicon and pass it to Sarcina for post-silicon packaging, streamlining the process and reducing costs while maintaining high performance. Sarcina’s die-to-die interposer solution enables AI customers to use chiplets to form large silicon areas, supporting high-performance computing with satisfactory wafer yields. This large package design allows for the integration of more memory, which is crucial for generative AI applications that require rapid, parallel data processing.
Key features of the new Sarcina AI platform:
- Cost-effective chiplet design: A cost-efficient alternative to expensive SoC solutions.
- Fast die-to-die interconnection with UCIe-A standard: Up to a 64-bit data interface per module and 32 GT/s transmission speed per lane. Supports multi-module configurations, interconnect redundancy and sideband configuration as specified by the UCIe 2.0 standard.
- FOCoS-CL advanced packaging technology: A cost-effective replacement for costly 2.5D TSV (Through-Silicon Via) silicon interposer technology, as well as other expensive solutions like silicon bridge die with fan-out RDL interconnections.
- LPDDR5X/6 and HBM options: Provides superior memory bandwidth and efficiency to support a variety of AI workloads. LPDDR6 memory also incorporates 3D stacking technology similar to HBM, achieving data rates of approximately 10 GT/s.
- Scalable package size: Supports package sizes up to 100 mm x 100 mm, enabling scalability for diverse AI applications.
- Power specifications: Less than 500 W with forced-air cooling and up to 1,000 W with liquid cooling, offering flexible deployment options.
- Memory integration: Supports up to 20 LPDDR5X/6 memory chips or up to 8 HBM3E chips, facilitating high-speed data processing for AI workloads.
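As a hedged illustration of how the two memory options compare, the sketch below applies the chip counts above (up to 20 LPDDR5X/6 packages or up to 8 HBM3E stacks) to generic per-device interface widths and pin rates. The x64 LPDDR package width, the ~10 GT/s LPDDR rate and the 1024-bit, ~9.6 GT/s HBM3E figures are industry-typical assumptions, not Sarcina specifications.

```python
# Illustrative aggregate memory bandwidth for the two options listed above.
# Chip counts come from the announcement; interface widths and pin rates
# are generic industry assumptions used only for the arithmetic.

def aggregate_gb_per_s(devices: int, bus_bits: int, gt_per_s: float) -> float:
    """Aggregate bandwidth in GB/s across `devices` memory chips."""
    return devices * bus_bits * gt_per_s / 8

# Assumption: x64 LPDDR5X/6 packages at ~10 GT/s (the LPDDR6 rate quoted above).
lpddr = aggregate_gb_per_s(devices=20, bus_bits=64, gt_per_s=10)

# Assumption: 1024-bit HBM3E stacks at ~9.6 GT/s per pin.
hbm3e = aggregate_gb_per_s(devices=8, bus_bits=1024, gt_per_s=9.6)

print(f"20x LPDDR5X/6 (assumed x64 @ 10 GT/s):    ~{lpddr:,.0f} GB/s")
print(f" 8x HBM3E     (assumed x1024 @ 9.6 GT/s): ~{hbm3e:,.0f} GB/s")
# -> roughly 1,600 GB/s vs. roughly 9,830 GB/s under these assumptions
```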
The launch of Sarcina’s AI platform is set to transform AI computing capabilities across industries such as autonomous systems, data centers and scientific computing.
About Sarcina Technology
Sarcina Technology specializes in advanced semiconductor packaging solutions that drive next-generation AI and computing innovations. Since 2011, the company has been committed to delivering cutting-edge, high-performance packaging solutions tailored to meet specific customer requirements. Sarcina’s mission is to push the boundaries of AI computing system development by providing a unique platform that enables efficient, scalable, configurable and cost-effective semiconductor packaging solutions for AI applications. For more information, please visit www.sarcinatech.com.