Amazon isn’t sold on AMD’s tiny Zen 4c cores in manycore Bergamo processors

Publish date: Thu, 15 Jun 2023, 10:03 AM

Amazon Web Services may be happy to deploy AMD's 96-core Epyc processors in its datacenters, and even announced an instance based on the chips this week, but the cloud giant isn't so sure about the 128-core Bergamo parts the Zen designer revealed this week.

"We don't chase the core count as much with AWS," David Brown, VP of AWS Elastic Compute Cloud, told The Register.

The comment breaks with conventional wisdom, which says that cloud operators appreciate higher core counts because they allow more virtual machines and containers in a single server, and therefore increase the earning potential of each box in a rack.

This is the core concept on which Ampere Computing built its business. Back in 2020 the company launched an Arm-compatible datacenter processor with up to 80 cores optimized for cloud native workloads and integer performance. The processors quickly drew the attention of the major cloud providers, including Oracle, Microsoft, Google, Tencent, Alibaba, and Baidu to name just a few. The chip designer later launched a 128-core variant in 2021 and this spring revealed a 192-core part.

AMD tried to do something similar with the launch of its Bergamo Epyc processors, which can be had with up to 128 Zen 4c x86 cores per socket, and up to 256 per node in a two-socket server. That density comes at the cost of lower clock speeds.

At least for now, AWS, the world's largest public cloud provider, isn't interested.

"The thing you have to think about is what else do you have to put in the server," Brown said. "Servers are designed with a certain amount of memory per core. With higher core counts, a lot of the other parts of the server get very, very expensive, and with the move to DDR5 we see even more challenges right now."

According to Brown, AWS prefers to standardize around the CPU, whether it be Intel's, AMD's, or its own homegrown Graviton silicon, which boasts specs that look a lot like Ampere's. Doing so means the cloud colossus can focus on tweaking the server to its task.

"When you see general purpose, high performance, memory optimized [instances] it's really the same chip across all of those," Brown said. "All that's changing across those three is the amount of memory that you get per CPU."

By changing out the number or capacity of the DIMMs, AWS can tune the memory bandwidth and capacity to align with customers' expectations.
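
To put the memory-per-core point in concrete terms, here is a rough, illustrative sketch in Python. The instance-family names and GiB-per-vCPU ratios below are assumptions for the example only, not AWS specifications; the point is simply that moving from a 96-core to a 128-core socket multiplies the DRAM a two-socket host has to carry if the operator wants to hold the same memory-to-core ratio.

    # Illustrative sketch only: family names and GiB-per-vCPU ratios are
    # assumptions, not official AWS instance specs.
    GIB_PER_VCPU = {
        "general_purpose": 4,    # e.g. an m-class style ratio
        "compute_optimized": 2,  # e.g. a c-class style ratio
        "memory_optimized": 8,   # e.g. an r-class style ratio
    }

    def server_memory_gib(cores_per_socket: int, sockets: int, family: str) -> int:
        """DRAM a host needs to keep the family's GiB-per-vCPU ratio,
        assuming one vCPU per physical core (no SMT) for simplicity."""
        vcpus = cores_per_socket * sockets
        return vcpus * GIB_PER_VCPU[family]

    if __name__ == "__main__":
        # Compare a 96-core socket with a 128-core socket, both in a
        # two-socket node, across the three example families.
        for family in GIB_PER_VCPU:
            for cores in (96, 128):
                total = server_memory_gib(cores, 2, family)
                print(f"{family:18} {cores:3}-core x2 socket: {total:5} GiB DRAM")

Under those assumed ratios, the jump from 96 to 128 cores per socket adds hundreds of gibibytes of DRAM per host for a memory optimized configuration, which is the cost pressure Brown is describing.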

Brown also noted that core counts have been creeping up in general purpose chips. Five years ago, he said, AWS was buying 24- and 32-core parts. Today it's buying AMD's general purpose Epyc silicon with 96 cores. While those processors pack fewer cores than Bergamo, they're also better specced in terms of clock speed and cache per core.

With that said, Brown emphasized that this philosophy isn't set in stone. "I don't think we have a religious belief that says we don't like those other ones," he said.

While AWS may not be sold on AMD's latest Epycs, the cloud provider is clearly in the minority, at least as far as the concept of core-optimized CPUs is concerned.

Whether Bergamo will help AMD replicate the success Ampere has had with its Altra series remains to be seen. AMD has been rather quiet about which cloud providers plan to deploy its latest-gen silicon, although such reticence is not unusual, as it takes time for operators to assess parts before putting them to work. At least one hyperscaler, Meta, plans to deploy Bergamo alongside AMD's Genoa chips to bolster the throughput of its services. ®

 

https://www.theregister.com//2023/06/15/amd_aws_bergamo/
