Future Tech

TSMC boss says one-trillion transistor GPU is possible by early 2030s

Tan KW
Publish date: Tue, 02 Apr 2024, 07:52 AM

3D chiplets will be the key to building the world's first one-trillion transistor GPU, say TSMC chairman Mark Liu and chief scientist H.-S. Philip Wong.

The semiconductor industry will always want to cram more transistors into processors, but as Liu and Wong outline in an IEEE Spectrum report, AI has made this demand even more insatiable. Established companies and startups alike are snapping up as many GPUs as they can to run AI workloads, and chips with higher performance density are naturally in high demand.

Liu and Wong argue that today's 100-billion transistor GPUs just won't cut it: a single GPU will need one trillion transistors. They believe such a chip could arrive as early as 2034, a decade from now.

While newer nodes with greater transistor density will play an important part in getting to one trillion transistors, they won't be enough on their own. Instead, Liu and Wong say 3D chiplets, the cutting-edge technique of connecting several chips side by side and on top of one another, will be crucial to reaching one trillion transistors.

"The continuation of the trend of increasing transistor count will require multiple chips, interconnected with 2.5D or 3D integration, to perform the computation," the article says. "We are now putting together many chips into a tightly integrated, massively interconnected system."

The argument for requiring both chiplets and 3D chip stacking to build the world's first one-trillion transistor GPU is pretty simple. The maximum size of a single chip is bounded by the reticle limit of the lithography used to manufacture it, and today's reticle limits cap out at around 800 mm². Not only is producing such a processor expensive, it's not even big enough to hit a trillion transistors any time soon.
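A quick back-of-envelope check makes the point. The density figure below is an assumption, not from the article; it's merely in the ballpark of leading-edge logic nodes circa 2024:

```python
# Illustrative estimate: how far does one reticle-limited die get you?
# The density value is an assumed ballpark figure, not a claim from the article.
DENSITY_TR_PER_MM2 = 150e6   # ~150 million transistors per mm^2 (assumed)
RETICLE_LIMIT_MM2 = 800      # approximate reticle limit cited in the article

# Largest plausible transistor count for a single monolithic die
max_transistors_per_die = DENSITY_TR_PER_MM2 * RETICLE_LIMIT_MM2
print(f"~{max_transistors_per_die / 1e9:.0f} billion transistors per reticle-limited die")

# How many such dies would a trillion-transistor GPU need?
dies_needed = 1e12 / max_transistors_per_die
print(f"~{dies_needed:.1f} reticle-sized dies for one trillion transistors")
```

Under those assumed numbers, a monolithic die tops out around 120 billion transistors, so a trillion-transistor GPU would need something like eight reticle-sized dies stitched together, which is exactly the chiplet argument.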

Not only can chiplets break past the reticle limit (and they already have in many cutting-edge processors), but the approach can also reduce manufacturing costs, especially if the individual chiplets are on the smaller side.

Stacking chips on top of each other, 3D stacking, is another necessary component according to Liu and Wong; there's only so much room on a substrate, and when that room runs out horizontally, going up is the only option left. Their discussion of 3D stacking largely focuses on high bandwidth memory (HBM) chips, which are the memory of choice for the latest datacenter GPUs.

However, 3D chip stacking isn't just for increasing memory density. For instance, AMD's 3D V-Cache technology places a 64MB slice of L3 cache on top of its chiplet-based CPUs in the Ryzen and Epyc lineups. There's also no reason a company couldn't stack one processing chip on top of another if necessary, though doing so could prove problematic with respect to power and heat.

The idea of hitting one trillion transistors on a single GPU within a decade isn't outside the realm of possibility. Per a graph TSMC made of transistor count increases since 2008, it takes around eight to ten years for transistor counts to grow tenfold. In 2008 the limit was one billion; in 2016 the 10-billion transistor barrier was broken; and the 100-billion transistor barrier was passed just days ago with Nvidia's Blackwell GPUs.
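That trend can be sketched as a simple extrapolation. Assuming a clean tenfold-per-eight-years fit (the article says eight to ten years, so the real curve is fuzzier), the projected year for one trillion transistors lands squarely between Intel's and TSMC's targets:

```python
import math

# Milestones from the article (approximate): year -> transistor count
# 2008: 1 billion, 2016: 10 billion, ~2024: 100 billion (Blackwell)

def projected_count(year, base_year=2008, base_count=1e9, years_per_10x=8):
    """Transistor count under an assumed clean 10x-per-8-years trend."""
    return base_count * 10 ** ((year - base_year) / years_per_10x)

# Solve projected_count(year) == 1e12 for the year:
# year = 2008 + 8 * log10(1e12 / 1e9)
year_for_trillion = 2008 + 8 * math.log10(1e12 / 1e9)
print(year_for_trillion)  # 2032.0
```

An eight-year doubling-of-exponent cadence points at 2032; stretching the cadence to ten years pushes the answer to 2038, so TSMC's 2034 sits comfortably inside the historical trend.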

By the way, if you're at all familiar with Nvidia's datacenter products, you might be confused that TSMC puts the current limit at 100 billion transistors, since Nvidia's Grace-Hopper GH200 is said to have over 200 billion. TSMC seems not to count Nvidia's CPU-GPU combo devices, probably because they're not on the same package.

While TSMC is talking up a trillion transistors by 2034, Intel CEO Pat Gelsinger says he can get there by 2030, just six years from now. Like TSMC, Gelsinger says the key is 3D stacking, though he clearly thinks Intel can do it better and faster than TSMC can. The Intel CEO also touts transistor-level improvements such as RibbonFET and backside power delivery, topics conspicuously absent from Liu and Wong's essay.

Whether the first trillion-transistor GPU (or CPU) comes in 2030 or 2034, it's clear that multi-chip designs with 3D stacking will be the path forward. The latest nodes just aren't boosting density like they used to, to the point where 3D stacking seems to be merely offsetting this decline rather than increasing the pace of innovation. ®

 

https://www.theregister.com//2024/04/01/tsmc_one_trillion_transistor/
