Arm Viewpoints: Chiplets explained – the technology and economics behind the next wave of silicon innovation

Why modular silicon, open standards and system-level thinking are reshaping the future of compute

Summary

In this episode, they explore:

  • In this episode of the Arm Viewpoints podcast, Brian Fuller sits down with Austin Lyons, founder of Chipstrat and senior analyst at Creative Strategies, to unpack one of the most consequential shifts in semiconductor design today: chiplets. Lyons explains why chiplets are more than a packaging technique — they represent a new design methodology and economic model driven by rising silicon costs, scaling limits, and the need for faster innovation.
  • The conversation explores lessons from early adopters like AMD, the role of open standards such as UCIe, AMBA CHI, and Arm’s Chiplet System Architecture (CSA), and why a true multi-vendor chiplet marketplace is still a work in progress. Lyons also connects chiplets to broader business outcomes, including capital efficiency, risk reduction, and supply-chain resilience, and looks ahead to where chiplet-based systems could unlock new opportunities — from AI infrastructure to robotics.
  • And much, much more.

Speakers

Austin Lyons

Austin is an analyst focused on semiconductors and physical AI. He writes Chipstrat and cohosts the Semi Doped podcast. His background spans hardware engineering at Intel, full-stack software development at venture-backed startups, autonomous systems product management at John Deere, and nanoelectronics research. He holds an MSEE from UIUC and an MBA from the University of Iowa, combining technical depth with strategic perspective.

Brian Fuller, host

Brian Fuller is an experienced writer, journalist and communications/content marketing strategist specializing in both traditional publishing and emerging digital technologies. He has held various leadership roles, currently as Editor-in-Chief at Arm and formerly at Cadence Design Systems, Inc. Prior to his content-marketing work inside corporations, he was a wire-service reporter and business editor before joining EE Times and spending nearly 20 years there in various roles, including editor-in-chief and publisher.  He holds a B.A. in English from UCLA.

Transcript

[00:00:00] Hello, and welcome to another episode of the Arm Viewpoints podcast. I’m Brian Fuller, editor-in-chief at Arm, and today we’re excited to welcome Austin Lyons, founder of Chipstrat and senior analyst with Creative Strategies, who’s an expert in chiplet technology and advanced semiconductor design.

In this episode, we deep dive into chiplets, not just as a packaging technique, but as a fundamental shift in how silicon is designed, built, and monetized. Austin brings both an engineer’s grounding and a market analyst’s perspective to a topic that’s quickly becoming central to the future of compute. In our conversation, you’ll hear what chiplets really are and why they’re as much a design methodology and economic model as they are a technical innovation.

You’ll also hear about the economic forces pushing the industry toward chiplets, from rising mask costs to the limits of monolithic scaling; lessons learned from early chiplet adopters; the biggest barriers standing in the way of a true chiplet marketplace; [00:01:00] how interconnect technologies and open standards will determine whether the chiplet ecosystem truly scales; and the industries most likely to benefit next, from AI infrastructure to automotive and beyond.

It’s a thoughtful, practical look at where chiplets are today and what it will take to turn promise into platform. Let’s get into our conversation with Austin Lyons.

Brian: Austin Lyons, welcome to Arm Viewpoints. Thanks for joining us.

Austin: Thank you for having me.

Brian: Let’s start off talking about you. Tell us a little bit about yourself and your background and how you became an analyst.

Austin: Yeah, so I’m an electrical engineer by training, and I worked in the industry as an engineer for a while. But I’m also equally interested in the business of technology, so later I got my MBA and worked as a product manager.

And eventually I started writing a Substack newsletter called Chipstrat, where I sat at that intersection of technology and business strategy and was essentially just analyzing the industry. And from there I [00:02:00] moved into being an analyst as a full-time job.

Brian: What do you find the most rewarding in that?

Austin: Yeah, that’s a really interesting question. What I love most about it is getting to talk to all the companies who are at the forefront of AI, semiconductors, edge AI, robotics, all the really interesting things that are happening right now, and go on an intellectual journey with them to understand how they see the future and where they’re headed. And then, of course, as an analyst, to provide my two cents on maybe some narrative they could improve or things they might not be thinking about. Historically, if you work for one company, then you’re an expert in your company and you get to think about that industry.

But what’s fun for me is to span many industries and many companies.

Brian: Yeah, absolutely. And because you’re an engineer, you can call BS when you hear it, right?

Austin: Totally, totally. Yeah, and I think that actually gets a lot of respect. A lot of [00:03:00] executives in semiconductor companies were engineers by training, and so I think they can really appreciate someone who was an engineer, who can come in and understand the technology and maybe push back. But of course, the marketing folks also appreciate someone who’s been a product manager and thought about the business side of things. I get to use both sides of my brain.

Brian: So when you were studying to be an engineer in Iowa and at Illinois, did you see yourself going down the design engineer career path, or even engineering marketing? What were you thinking then?

Austin: Yeah, hindsight’s 20/20, but honestly at the time I really wasn’t thinking very deeply about a career. I was just thinking about what was the most fun and intellectually stimulating. I went to grad school at the University of Illinois because I was having so much fun in undergrad, and at the time, nanoelectronics were super interesting.

Graphene was a material that was being explored as a sort of post-silicon material, and I was just like, oh man, I’ve got to learn more; how can I [00:04:00] stop learning? That took me to grad school, and actually in grad school I had some entrepreneurial experiences too, where I started a company with friends.

And so I didn’t know what I wanted to do other than that I liked engineering and I liked the idea of company building. It was just one of those things where I was taking it year by year.

Brian: It’s fascinating how our careers progress. And 20 years from now, you’ll look back and you’ll go, how did I get from point A to point B?

But there will be a logical progression.

Austin: Totally.

Brian: All right, to the meat of the matter: chiplets. Is it a technology? Is it a marketplace? Is it something else? What do you think?

Austin: I would define chiplets as a technology and really also a design methodology. It’s how you decompose a system, which traditionally used to be a monolithic system on a chip, into individual components that can be made as individual [00:05:00] die and then packaged back up together.

So that’s why I say it’s both a technology (it’s designing and packaging composable little subsystems, essentially) and of course a design methodology, because how you design a chiplet-based system is different. You have to think differently; it’s a lot of systems-level thinking too. As far as a marketplace, there definitely will be a chiplet marketplace, and I think that’s what has always gotten people really excited about the idea of chiplets.

And as we’ll talk about, I’m sure in this conversation, we really aren’t there yet, but you can definitely see a path to that future.

Brian: It’s fascinating, isn’t it? Because it’s become such a hot topic in just the last five to seven years, right? Why? Why do you think it’s caught fire?

Austin: I definitely think there’s the economic angle, which is: chips are getting bigger and bigger, traditionally like a system on a chip, and in process technology, [00:06:00] the process nodes are getting smaller and smaller. The cost of designing those systems continues to increase. And the bigger your chip is, the worse your yield is, for example.

On top of that, we are starting to see trends in the industry that demand as much compute as possible, as much memory as possible, especially with LLMs. So you’re already incentivized to try to make chips, or systems, as big as possible. And then, to take it one step further, we’re actually getting designs larger than the reticle size, larger than the masks for the chip can actually be.

So we need to physically think about how we expand beyond what we can make in a chip today. We have all these pressures to get bigger and bigger, yet the cost is going up. And then there’s this equal sort of pressure to say, is there a way to do this differently, as we start to not see as much performance benefit for the cost increase as we move from process node to process [00:07:00] node? And then, especially as you get into the weeds, you realize, hey, not all of these subcomponents of the system need to be on that new process technology.

Logic scales as you make your transistors smaller and smaller, but things like memory or IO or analog don’t see the same economic benefits from moving to smaller and smaller transistor sizes. So you have the industry trend of needing more compute and more memory, and yet things are getting more and more expensive.
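
To make the yield argument concrete, here is a back-of-the-envelope sketch using the simple Poisson defect model commonly used for first-order yield estimates. The defect density and die sizes below are illustrative assumptions, not figures from the conversation.

```python
import math

# Simple Poisson defect model: yield = exp(-area_cm2 * defect_density).
# The defect density and die areas are assumed, purely for illustration.

DEFECT_DENSITY = 0.1  # defects per cm^2 (assumed)

def die_yield(area_mm2: float, d0: float = DEFECT_DENSITY) -> float:
    """Fraction of dies expected to come out defect-free."""
    return math.exp(-(area_mm2 / 100.0) * d0)  # convert mm^2 to cm^2

monolithic_mm2 = 800   # one big, near-reticle-limit SoC (assumed size)
chiplet_mm2 = 200      # the same logic split across four smaller chiplets

print(f"Monolithic 800 mm^2 die yield: {die_yield(monolithic_mm2):.1%}")  # ~44.9%
print(f"Single 200 mm^2 chiplet yield: {die_yield(chiplet_mm2):.1%}")     # ~81.9%
# A defect now scraps one small chiplet instead of the whole 800 mm^2 die,
# so far less good silicon is thrown away per wafer.
```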

There’s a lot of pressure to say, could we do this differently? What if we break these things apart? Big companies are definitely thinking about that. And if you’re a startup, an entrepreneur, and you’re coming in and trying to say, I have an idea for something novel that I could do here, how can I compete in this industry?

How can I differentiate? You are looking at the cost of designing a chip going to hundreds of millions of dollars, and you’re asking yourself: can startups even exist in this space? Can they even afford to tape out a chip, or do they need to literally go raise a hundred million [00:08:00] dollars? And so startups see the idea of chiplets and they recognize, oh, this is really interesting.

What if I can focus on the novel intellectual property that I have? Maybe it’s spinning out of my PhD program, or an idea I had in industry that was tangential to what my company is doing. Could I focus on just the IP and then maybe actually make that into a chiplet that other vendors or people could plug into their systems?

I think from a technology perspective, but definitely from an economic and an entrepreneurial perspective, there are people coming at it from different angles, all seeing interest in this idea of chiplets and systems of chiplets.

So, in a sense, necessity is the mother of invention.

Brian: But the thing that sort of boggles my mind is that the rise of chiplets addresses a particular economic scaling problem, as you point out. And to get to the solution, you have to change a lot about the design flow, and yet the industry [00:09:00] seems to have done that with nary a hiccup.

Austin: Yes.

And you are right: the shift from monolithic to chiplets impacts everything from design to integration and packaging to testing. It took the big companies to go first, like AMD, right? You can’t have an ecosystem bootstrap itself here. You really needed someone to go first that had the capital and the skills to co-design and make all those changes from end to end, right?

So you could show up as a startup and say, hey, I have a chiplet. But who’s on the other end of that transaction? They’re going to say, okay, I buy your chiplet; how do I put this into my system? How do I know that it communicates with my system? Will my EDA tools support it? And so on and so forth.

Brian: So Austin, you’ve written a lot about early chiplet adoption and how we’ve gotten to where we are today. You’ve talked about the large-company embrace of the [00:10:00] technology: AMD, Intel, others. What lessons did those implementations teach the industry so far?

Austin: So there’s actually a really good paper from AMD, and a good podcast. There’s a paper from Sam Naffziger and his team at AMD.

It’s from 2021, and it walks through AMD’s process of why they went the chiplet route with their EPYC CPUs and their Ryzen CPUs, and what the benefits were. I think there’s a nice podcast with Mark Papermaster and Sam discussing this as well. But ultimately, yeah, these big companies had to go first, and in doing so, they showed the industry, one, of course, that it’s possible.

But two, one of the interesting things is they demonstrated how this enables IP reuse and chiplet reuse. Before, if you were making a portfolio of CPUs, you might be able to reuse some of the soft IP, like some of the [00:11:00] RTL or the design, across that portfolio: we want some CPUs that have a ton of cores and a ton of memory, and others that are maybe lower end, with fewer cores and less memory.

They showed how, with chiplets, you can actually start to reuse the entire chiplet to create different SKUs. If we fast-forward a little bit to the AI accelerator space, which has been all the rage the last three years: AMD came out with the Instinct MI300A and MI300X, which showed this perfectly.

The MI300X was just eight GPU chiplets; you can think of them as AI accelerator chiplets, perfect for data center AI training and inference. And then the MI300A was six of those GPU chiplets, but they swapped a few out and put some CPU chiplets on instead, and built a different product. Very similar, but it was targeted at HPC workloads.

It was called an APU; it had this tight integration between the GPU and CPU on the chip. But I share this to say [00:12:00] that AMD has been showing the industry that you can design chiplets and then start to reuse them. And of course, those same CPU chiplets that they use in that product, they now use in their EPYC server CPUs, for example.

Not only did they show that it’s possible and that it’s economical (they saw all the yield benefits you’d expect as you move to the smaller die you’re integrating), they showed the industry that they could bring the EDA partners and their foundry partners together, do all the packaging, and technically, feasibly pull it off. They even showed the benefits to their product portfolio and their business differentiation, and actually got to reap the benefits of the reuse.
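
The reuse pattern Austin describes can be sketched in a few lines. The Python snippet below is purely illustrative: the chiplet names, counts, and the build_sku helper are hypothetical stand-ins loosely modeled on the MI300X/MI300A split, not AMD’s actual bill of materials.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chiplet:
    name: str
    kind: str  # "gpu", "cpu", or "io"

# A small catalog of already-designed (hardened) chiplets.
GPU_DIE = Chiplet("accel-die", "gpu")
CPU_DIE = Chiplet("cpu-die", "cpu")
IO_DIE = Chiplet("io-die", "io")

def build_sku(name: str, dies: list[Chiplet]) -> dict:
    """Describe a product SKU as a count of each kind of chiplet it reuses."""
    counts: dict[str, int] = {}
    for d in dies:
        counts[d.kind] = counts.get(d.kind, 0) + 1
    return {"sku": name, "dies": counts}

# The same hardened GPU chiplet reused across two very different products:
accelerator_x = build_sku("accelerator-X", [GPU_DIE] * 8 + [IO_DIE] * 4)
apu_a = build_sku("apu-A", [GPU_DIE] * 6 + [CPU_DIE] * 3 + [IO_DIE] * 4)

print(accelerator_x)  # {'sku': 'accelerator-X', 'dies': {'gpu': 8, 'io': 4}}
print(apu_a)          # {'sku': 'apu-A', 'dies': {'gpu': 6, 'cpu': 3, 'io': 4}}
```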

Brian: We talked about a marketplace a little bit earlier. A lot of people would like to see this movement get to a true sort of plug-and-play, open marketplace. Are there barriers to that right now?

Austin: Yeah, so if you start to think through a marketplace: hey, I make a chiplet and I want to sell it to you, and [00:13:00] you want to integrate it with the rest of your system. As you think this through, you realize, oh, there are a lot of interfaces and protocols and standards that need to be defined.

You need to trust that when you buy my chiplet, it communicates in a particular way and you know how to interface with it. For example, my chiplet speaks English and you speak English, so you know that when you plug it in, it’s all going to talk together, right? And so to have a true marketplace, you either have to do a bunch of bespoke integration (oh, this is how this company’s chiplet talks, this is how that company’s chiplet talks), which kind of kills all the benefits, or you need to have interoperability through standards, through a defined way to communicate, a defined way to partition the system, and so on.

Brian: Arm’s obviously very active in this space. Some of the things the company has enabled: the Chiplet System Architecture, or CSA, and other standards like AMBA CHI and UCIe. Talk a [00:14:00] little bit about how these frameworks and specific interconnect technologies fit into the chiplet picture.

Austin: These are exactly what we just talked about. They take the idea of chiplets from internal use and bring it externally, toward a potential future marketplace, through standards. So I’ll start at the lowest layer: you’ve got UCIe, Universal Chiplet Interconnect Express. It’s like PCIe, and this is how chiplets talk to each other.

It’s the physical layer defining how signals move between the dies, the protocol for how the data is transferred, and even the software model: how these chiplets are discovered when they’re in a system. Taking it a level of abstraction higher, there’s AMBA CHI C2C. Lots of acronyms here.

AMBA is an old standard, the Advanced Microcontroller Bus Architecture. CHI is Coherent Hub Interface, and C2C is chip-to-chip. This helps preserve [00:15:00] coherent memory semantics across the chiplets, so basically you turn multiple dies into one logical chip; they don’t act as separate systems that have to pass messages.

It defines how to bring these die together so that they act as one. And as an analogy, it’s actually a lot like scale-up in today’s AI accelerator systems, letting XPUs talk to and access each other’s HBM as if they were one; this is essentially the chiplet version of that. Okay, so that’s great.

You’ve got chiplets that can talk and they can act as one system. Now the Arm CSA, the Chiplet System Architecture, sits on top of that, a level of abstraction even higher. And it really defines things like how to partition your system into chiplets. You used to be making a single monolithic SoC die, everything sitting together on the same piece of silicon, all talking.

Now you’ve got this chiplet world that you’re trying to go to. You know how the die are supposed to talk, you know how they can act as one system, but you still have to ask [00:16:00] that question: how do I partition the different features and functionality into chiplets? Especially if you’re hoping to partition the system into different components and maybe eventually buy some components off the shelf, or license some of that IP from someone else, you need to know how the industry expects your system to be partitioned.

You can think of it this way: the CSA lets vendors build systems or components independently and know that when they put them all together, even if it’s only within their own IP, they’ll get a coherent system.
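
One way to picture why these standards matter for a future marketplace: if every chiplet ships with a machine-readable description of the die-to-die link and protocol it speaks, an integrator can check compatibility up front instead of doing bespoke integration work. The sketch below is conceptual only; the descriptor fields and the interoperable check are invented for illustration and are not the actual UCIe, AMBA CHI C2C, or Arm CSA data formats.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChipletDescriptor:
    vendor: str
    function: str   # e.g. "npu", "io-hub", "memory"
    d2d_link: str   # physical/link layer, e.g. "UCIe 1.1"
    protocol: str   # transport/coherence, e.g. "AMBA CHI C2C"

def interoperable(a: ChipletDescriptor, b: ChipletDescriptor) -> bool:
    """Two chiplets can be integrated directly if they speak the same
    die-to-die link and the same transport/coherence protocol."""
    return a.d2d_link == b.d2d_link and a.protocol == b.protocol

host_soc = ChipletDescriptor("vendor-a", "cpu-complex", "UCIe 1.1", "AMBA CHI C2C")
npu_tile = ChipletDescriptor("vendor-b", "npu", "UCIe 1.1", "AMBA CHI C2C")
odd_tile = ChipletDescriptor("vendor-c", "npu", "proprietary-serdes", "custom")

print(interoperable(host_soc, npu_tile))  # True  -> plug-and-play candidate
print(interoperable(host_soc, odd_tile))  # False -> bespoke integration needed
```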

Brian: Arm also is evolving, expanding from its IP legacy into subsystems with the Neoverse Compute Subsystems, and also Arm Total Design as a framework to enable teams to implement chiplets faster and more efficiently.

As an analyst, how do you view that?

Austin: So I see a very similar value proposition [00:17:00] trying to be realized here. With chiplets, what we talked about is essentially how you can decompose a system so that you can do it more cost-effectively, and it opens up opportunities to not have to design every component of the system yourself; you might be able to buy those from a startup or from a vendor or whatever. Neoverse CSS, Arm’s Neoverse Compute Subsystems, is in the same vein, which is: how do you speed up time to market? How do you let a company focus on what they do best?

Hey, instead of just licensing our CPU IP, why not get a CPU plus some of the other validated system components alongside it: interconnect, IO management, memory management, whatever. So I saw CSS as Arm taking a step and saying to customers, hey, we can help you with more of the design than just the CPU.

And that frees you up to spend more of your time on what you do best. And then Arm Total [00:18:00] Design puts an ecosystem around that and again says, hey, to get you to value as quickly as possible, we also have a bunch of partners that can come alongside you, partners that we have verified and have a good relationship with.

They can help you take your CSS-based system and get it all the way from architectural design to tape-out. So you’ve got EDA and IP partners who can help you. You’ve got design service partners if you want them. Foundries are integrated as part of Arm Total Design, and then all the way downstream there are firmware and software providers that can help you.

I think of it as a hierarchy. Companies can come in as low as they want if they already know what they’re doing and they’ve got all the skill set; they can start with just the CSS subsystem themselves. Or they can come in at higher levels of abstraction and say, hey, I used to be a system integrator, but now I actually want to build chips.

But help me with as much as possible, and I’ll just [00:19:00] focus on my little bespoke part that truly adds value over here.

Brian: So if I’m a C-level exec and I hear the phrase chiplet economy, how should I perceive that? What does it mean to me for, say, cost, innovation, supply-chain resilience, that sort of stuff?

Austin: So one of the things that a chiplet economy unlocks is the ability to decompose a system and let each subcomponent innovate at the appropriate pace. In a monolithic world, if you’re taping out chips every year, you have to redesign everything every year, or take your IP and make it work for the next process node and quickly squeeze in whatever innovations you have. Now this starts to decouple it. Maybe your IO isn’t changing that much, maybe your SRAM isn’t changing that much, so you could design those chiplets [00:20:00] or potentially, eventually, buy them from someone else, and maybe you don’t touch them for a few years until the system demands it again, right?

So maybe those are on a two-year cadence or a three-year cadence, but then maybe your NPU design you want to update every year, as the world is changing and you realize you need different architectures. Instead of needing to redesign the entire system on a one-year cadence, you can now start to redesign only the components that need to be improved on a one-year cadence, while the others might be able to be on a two- or three-year cadence. On top of that, you can start to amortize your NRE, non-recurring engineering, sort of your design costs. Again, if you are designing a chiplet, now you can reuse that chiplet (the hardened IP, essentially) across many different SKUs.

As a C-level executive, you’re thinking, oh, that’s great, that’s better capital efficiency. If you can start to decouple these blocks and make individual dies, you can start to [00:21:00] control the risk a little bit better earlier upstream: if one team is building this die and they have issues, it’s not stopping everyone else.

You also have things upstream where you could do known-good-die testing and such, so you can get information sooner about whether you’re on the right track and whether things are working. So there’s a lot of lowering risk, better capital efficiency, faster iteration, or iteration at the right pace. And then you can have SKU flexibility, and you can potentially even have

late-binding decisions, where you say: we know a lot of the system is these dies that we feel confident in, but we’re not quite sure what 2027 or 2028 or 2029 looks like for that block. Maybe we can wait a little bit before having to get started on that design.
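
The amortization point lends itself to quick arithmetic. In the hypothetical sketch below, all dollar amounts, SKU counts, and cadences are made up purely to illustrate how reuse spreads NRE across products; they are not figures from the conversation.

```python
# Rough arithmetic for the capital-efficiency point: amortizing a chiplet's
# non-recurring engineering (NRE) cost across the SKUs that reuse it.
# All dollar figures and counts below are invented illustrations.

def nre_per_product(nre_dollars: float, products_reusing_it: int) -> float:
    return nre_dollars / products_reusing_it

io_die_nre = 50e6    # assumed one-time design cost of a slow-changing IO chiplet
npu_die_nre = 120e6  # assumed design cost of an NPU chiplet refreshed every year

# The IO chiplet is reused across, say, 6 SKUs over a multi-year cadence,
# while the NPU chiplet is redesigned each generation for 2 SKUs.
print(f"IO die NRE per SKU:  ${nre_per_product(io_die_nre, 6)/1e6:.0f}M")   # ~$8M
print(f"NPU die NRE per SKU: ${nre_per_product(npu_die_nre, 2)/1e6:.0f}M")  # $60M
# Decoupling the blocks lets the slow-changing die spread its cost widely
# while the fast-moving die iterates on its own schedule.
```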

Brian: Yeah. And that seems to me really important in an era when the workloads that these technologies are being built for downstream are just so disparate and [00:22:00] constantly changing. You need that sort of flexibility.

Austin: Yep, absolutely.

Brian: Geoff Tate, founder of Rambus (he was at AMD for a while, and later Flex Logix), has argued that chiplets are still a technology for the quote-unquote big players. Do you see Arm’s approach changing that dynamic and making chiplet technology more accessible to mid-level companies or even startups?

Austin: Yeah, I think what Arm is doing is putting all the things in place that are needed to make it possible to move beyond the big players. Without standards, without an ecosystem, without reference designs, it just wouldn’t be possible and it wasn’t possible.

Startups tried, and it just wasn’t possible. Now Arm is putting into place the things that are needed for a startup or a medium-sized company to experiment and to move toward chiplets.

Brian: What’s the biggest misconception people have about chiplets?

Austin: So I think the biggest [00:23:00] misconception, and I alluded to it earlier, is that things are magically simpler now.

Oh, I’ll just design my chiplet, I’ll get a couple chiplets off the shelf, I’ll plug ’em all together, and now I’m shipping the full enchilada, and that felt pretty easy. While there’s some truth to it (now you have Lego blocks and you can put them together), there’s actually just a new level of complexity at the system integration level.

Now, when you’re thinking about, let’s say, thermal hotspots: before, when you owned that whole monolithic SoC, you had a pretty good sense of where the hotspots were gonna be, and you would lay out your blocks on the die accordingly. You still have to do that, but now you have to do it at the system level.

Okay, I’ve got these different chiplets and I’m integrating them. You still have to think about hotspots, you still have to think about thermals, and oh, by the way, maybe one of those is a chiplet from someone else, so you’re gonna need to get information from them about their thermals, right? So you’re still [00:24:00] having to deal with thermal issues and hotspots.

It’s just at the system level. And you can think about this shifting in all sorts of things: power, routing, signals and noise. All that complexity is still there; now it’s just at the system level.
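
As a toy illustration of the package-level bookkeeping Austin is describing, the sketch below imagines each chiplet (possibly from another vendor) publishing a simple thermal descriptor that the integrator checks against a package power budget. The fields, numbers, and the package_check function are all invented for illustration, not part of any real chiplet specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChipletThermalSpec:
    name: str
    vendor: str
    peak_power_w: float    # worst-case power the die can dissipate
    max_junction_c: float  # temperature the die must stay under

PACKAGE_POWER_BUDGET_W = 350.0  # assumed cooling capacity for the package

def package_check(chiplets: list[ChipletThermalSpec]) -> None:
    """System-level sanity check the integrator now owns."""
    total = sum(c.peak_power_w for c in chiplets)
    print(f"Total peak power: {total:.0f} W (budget {PACKAGE_POWER_BUDGET_W:.0f} W)")
    if total > PACKAGE_POWER_BUDGET_W:
        hottest = max(chiplets, key=lambda c: c.peak_power_w)
        print(f"Over budget; revisit the floorplan or the {hottest.name} die's power.")

package_check([
    ChipletThermalSpec("npu-tile", "vendor-b", 180.0, 105.0),
    ChipletThermalSpec("cpu-tile", "in-house", 120.0, 100.0),
    ChipletThermalSpec("io-tile", "in-house", 60.0, 110.0),
])
# Total: 360 W, over the assumed 350 W budget, flagged at integration time.
```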

Brian: So you’ve made a really articulate case for the value that chiplets bring today and the value that they’re gonna bring for years to come.

We know that it’s bringing a lot of value to data centers. But in the next three to five years, what other industries do you think are gonna be able to benefit from the technology?

Austin: Three to five years is an interesting lens too, because I will say one other misconception is that on day one, in your first pass, you’ll reap all this economic value; actually, it will come over many generations as you get to reuse your chiplets.

And so if you start to think about industry trends, and also what if you took a chiplet roadmap approach: one of the things that I am really interested to think [00:25:00] more deeply about is how chiplets could impact robotics. Because now you have vision-language models, the rise of transformer-based models that can turn robots into something that can actually understand and interact with humans just through natural language, and that of course are better at visualizing and understanding what they see.

There’s excitement about bringing humanoid robotics, or just more intelligent industrial robotics, to many applications. And when you start to think about the system design for those, you need brains, you’ve got all sorts of sensors, you might need something like a nervous system or sensors in the hands and feet, and there are lots of different ways the compute could look. And I don’t know if the industry has decided, oh yes, it’s just two big GPUs up here and a bunch of little microcontrollers down there. So I think there’s opportunity to experiment and ask what the right system architecture is, from a compute and memory perspective, for [00:26:00] robotics.

And I think with chiplets, right now is a good time for people to say, oh, what if we took a chiplet approach? How might that let us innovate and experiment more, and could that ultimately be the best route, instead of just taking what exists today (which isn’t bad), taking GPUs that exist today or an SoC that exists today and trying to repurpose it? It could be interesting to take a clean-sheet, first-principles approach and ask: if we took a chiplet approach, what are the different portfolios that we could support as a robotics platform vendor?

Brian: It’s interesting; chiplets seem to me to be the pathway, in the robotics context, to standardize certain types of robots and really unleash the software layers that will drive and control them going forward.

Austin: I think we’ll see a world where different robots handle different workloads and need different system requirements from a [00:27:00] compute and from a memory perspective. Some might run smaller models and do simpler things. Some might be really compute-intense, memory-intense, whatever.

Some might need crazy low latency given the environment that they work in. And yes, instead of just a one-size-fits-all approach, chiplets could be a way for someone to create a portfolio of SKUs for these different robots, for their different workloads; but maybe the vendor doesn’t have to design every single permutation, and can take a chiplet approach and just reuse some of their building blocks.

Brian: So when we get back together in two years, or maybe it’s only gonna be a year, you never know, ’cause technology accelerates: what milestones do you think we’ll point to as signs that the chiplet ecosystem is maturing?

Austin: Some milestones or victories that would be nice to see would be examples of multi-vendor chiplet systems.

So not just [00:28:00] a single vendor. Of course, if you start to see a single vendor that’s more of a medium-sized company, that’s interesting. But if we could start to see a single vendor that’s actually building a system that uses chiplets from other vendors, that would be a big milestone: a multi-vendor chiplet system.

In those examples, I hope we get information about how much bespoke integration work they had to do versus how much of that integration was reusable. If the integration gets more and more reusable, where they say, hey, here are three generations of this, and it took us a lot of work on the first pass, but then we got to reuse a lot and could still plug and play even a new version of a chiplet without having to do a bunch of new bespoke work,

I think that’s a signal that the standards are working and that the ecosystem is healthy and on the right track. Of course, in two years maybe we’ll start to see companies who treat their chiplet as a product, or, said another way, who try to productize their [00:29:00] IP in chiplet form. Back to the executive opportunities and how they should be thinking about the chiplet economy:

It does open up new pools of value, if you will, to say, hey, this used to be soft IP that we licensed; what if we invested in turning that into a chiplet? So in two years, if we see companies marketing chiplets again, or more than they have in the past, that’s a sign that they believe there will be customers on the other end, and that the ecosystem is healthy.

Finally, if we see not only a first multi-vendor product from someone but a second, that’s exciting to see: okay, they made it work and now they’re starting to build a roadmap on it. And then maybe we can start to get some data about whether that sped them up, or kept their margins higher, and so on and so forth.

Brian: Well, Austin, as the old saying goes, may you live in interesting times. We definitely live in interesting times indeed, and I really appreciate the time you’ve spent with us to share your views on chiplets [00:30:00] and the broader electronics design industry. So we will be keeping an eye out to see what you’ll be writing about in the next 12 months or so.

So thank you very much.

Austin: Sounds good. Thanks for having me.