Microsoft, Intel, AMD, and other tech giants form universal chiplet standard

Intel Core i9-12900K (Image credit: Rich Edmonds / Windows Central)

What you need to know

  • Microsoft, Intel, AMD, Arm, and several other companies have established a universal chiplet standard.
  • The standard, known as UCIe 1.0, will help connect hardware and software without requiring the use of proprietary technologies.
  • One aim of the standard is to allow the mixing and matching of chiplet components from different companies.

Microsoft is just one of many major tech companies to help establish a universal chiplet standard. AMD, Advanced Semiconductor Engineering (ASE), Arm, Google Cloud, Intel, Meta, Microsoft, Qualcomm, Samsung, and Taiwan Semiconductor Manufacturing Company (TSMC) recently formed a consortium to create the standard known as UCIe 1.0 (via ZDNet). UCIe is short for Universal Chiplet Interconnect Express.

Unlike USB, PCIe, and NVMe, which all have universal standards, chiplets are not standardized. At the moment, companies use proprietary interconnects that can limit which components can be used together. In other words, chiplets and related components are not plug-and-play right now. Having a universal standard for chiplets should help companies and consumers.

Intel claims that the establishment of UCIe 1.0 will reduce costs and deliver better input and output performance, all while using less power.

"Integrating multiple chiplets in a package to deliver product innovation across market segments is the future of the semiconductor industry and a pillar of Intel's IDM 2.0 strategy," said Sandra Rivera, executive vice president and general manager at Intel.

The consortium has not outlined a specific timeframe for the standard to be widely implemented. A press release on the topic provides a general sense that the implementation could be sooner rather than later.

"Upon incorporation of the new UCIe industry organization this year, member companies will begin work on the next generation of UCIe technology, including defining the chiplet form factor, management, enhanced security, and other essential protocols," said the release.

Sean Endicott
News Writer and apps editor

Sean Endicott brings nearly a decade of experience covering Microsoft and Windows news to Windows Central. He joined our team in 2017 as an app reviewer and now heads up our day-to-day news coverage. If you have a news tip or an app to review, hit him up.

  • Wow. But what exactly is UCIe? I think it's just a technical framework for mixing chiplets together in a chip/SOC or maybe on a motherboard. This should facilitate more multi-company chips, like the prior Intel CPU with embedded AMD Radeon graphics. I suspect this could help merge/bridge mobile and PC systems (e.g., an ARM chip with native x86 embedded hardware to resolve performance problems with emulation, or an x86 chip with ARM components for improved low-power features), if there's market interest for that (I know many Windows Central readers dream of this due to the fall of Windows Phone, but not sure there's a real market need out there). Interesting, and maybe just temporary, but Nvidia is not part of the group. Neither are any of the Chinese tech companies. It looks like they are working to ensure the standard works at both ends of the tech/cost spectrum: cost-optimized, very inexpensive, lower-performance combinations, and also ultra-high-end combinations pulling together the best features of different chip technologies. At the protocol level, UCIe is trying to leverage the same approach used by PCIe for standardizing communication to PC add-on boards, with the added twist of supporting external connections, presumably for clustering systems into a supercomputer or for plug'n'play of diverse functions. Here are some additional links (as always, AnandTech goes into the most depth, with Tom's Hardware a close second; Forbes seems to give the best high-level summary):
  • The specification covers the physical layer, laying out the electrical signaling standards that chiplets will use to talk to each other, as well as the number of physical lanes and the supported data rates. What the specification doesn't cover, however, is the packaging/bridging technology used to provide the physical link between chiplets.
  • What a surprise! Apple is nowhere to be seen when it comes to compatibility and interoperability of technology. We wouldn't want Mr Jobs to be rolling over in his grave, would we?
  • Could this solve the current mess that is RGB control/programming? As it is, you'll generally need multiple apps to control/program your PC's RGB lighting and it also creates issues where games only integrate with selected RGB implementations which creates inconsistency in terms of game-reactive lighting.
  • I'd say RGB lighting is, and maybe should always be, the last thing they have in mind with this move.
  • Hard disagree. Unification would help game devs integrate game-reactive lighting without having to account for 9001 different implementations or, as is most commonly the case, only focusing on one or two implementations. Would also help minimise bloat from having to run multiple RGB control apps (indeed, since MS is on board with this chiplet standard, they could actually integrate RGB control/programming into Windows itself without the need for third party apps should this chiplet standard be extended in that direction).
  • PalZer0, this is about modular chip and chipset design, to meld designs prior to chip manufacture. It's not that a standard for RGB lighting wouldn't be a good thing; it's just that that's a much higher-level issue. Lighting control would need something like a common programming model for the lights -- the same codes and command sequences to set colors, fade rates, etc. That's all quite different from how to physically combine diverse chip technologies inside a larger chip. Frankly, the lighting issue is a much simpler problem to solve compared to building a viable chiplet model -- far, far fewer variables and different technologies to unify.
  • Simplest explanation: chiplets modularize chips. Where traditionally a CPU, GPU, SOC, whatever has been a single monolithic sliver of silicon, chiplets allow the close integration of different slivers into a single package that (mostly) functions like a single chip. It is a way to offset yield issues with some component functions without having to scrap the entire die. It also allows continued use of known-good units alongside newer/different units. Mix-n-match to allow more varied products. As an example: a chiplet-based console SOC family might package a common CPU chiplet with different GPU chiplets (or a varying number of the same one) to produce SOCs for a Series S, Series X, etc., without having to resort to different wafer masks for each SOC; instead, the masks would focus on the different chiplets. For servers, as another example, a processor family could be assembled by combining different numbers of CPU chiplets with a variety of specialized processors. Instead of integrating the units at the motherboard level or on a single die, they integrate at a middle level: not as tight as a single silicon die, but tighter than a mobo. Good stuff.
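The mix-and-match idea in the last comment can be made concrete with a small sketch. Everything here is hypothetical and illustrative: the chiplet names, die areas, and the `build_package` helper are invented for the example and are not part of the UCIe specification. The point is only that one shared CPU die can be paired with different GPU dies to yield different SKUs without a new wafer mask per SKU.

```python
from dataclasses import dataclass

# Hypothetical model of mix-and-match chiplet packaging (illustrative only;
# names and die areas are invented, not drawn from the UCIe spec).

@dataclass(frozen=True)
class Chiplet:
    name: str
    kind: str            # e.g. "cpu", "gpu", "io"
    die_area_mm2: float  # area of this individual die

def build_package(sku: str, chiplets: list[Chiplet]) -> dict:
    """Assemble a package from reusable chiplet dies, the way a console
    SOC family might pair one common CPU die with different GPU dies."""
    return {
        "sku": sku,
        "dies": [c.name for c in chiplets],
        "total_area_mm2": sum(c.die_area_mm2 for c in chiplets),
    }

# One shared CPU chiplet, two GPU options -- no new mask set per SKU,
# only a different combination at packaging time.
cpu = Chiplet("common CPU die", "cpu", 80.0)
gpu_small = Chiplet("small GPU die", "gpu", 60.0)
gpu_large = Chiplet("large GPU die", "gpu", 120.0)

series_s = build_package("Series S-like", [cpu, gpu_small])
series_x = build_package("Series X-like", [cpu, gpu_large])

print(series_s["total_area_mm2"])  # 140.0
print(series_x["total_area_mm2"])  # 200.0
```

The yield argument in the comment also falls out of this structure: a defect on one small die scraps only that chiplet, not the whole package's worth of silicon, and the known-good CPU die is reused unchanged across both SKUs.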