Future Tech

AMD crams five compute architectures onto a single board

Tan KW
Publish date: Thu, 08 Feb 2024, 05:10 PM

With the launch of its Embedded+ architecture yesterday, AMD effectively posed the question: Why choose one compute architecture when you can have five?

The House of Zen's latest offering pairs an x64 Ryzen processor with a Versal AI Edge system-on-chip via PCIe, putting both on a single board for low-power, low-latency data processing applications at, say, the network edge.

The main processor can be picked from the Ryzen Embedded R2000 family, which was launched in 2022 and has up to four Zen+ CPU cores, 16 lanes of PCIe 3.0, and up to eight Radeon Vega graphics compute units.

That chip has a dedicated PCIe link to an AMD Versal Adaptive SoC, the first of which showed up in 2021. These Versal parts pack a complement of AI engines, an FPGA, and four Arm-designed CPU cores - two Cortex-A72 and two Cortex-R5. In terms of ML processing, AMD claims its top Versal chips are capable of pushing around 228 TOPS at INT8.

As the name Embedded+ indicates, this kind of stuff is supposed to be used in devices that are built to last in relatively tough conditions - public displays, instrumentation and machinery out in the field, network edge processing, transport and automotive, and so on. It doesn't have to be cutting edge and super powerful; reliability, cost, power-versus-performance efficiency, footprint, and specific workload validation are often more important. Using older architectures for these chips is therefore expected.

Indeed, AMD has its sights set specifically on industrial robotics, retail and surveillance security, smart city gear, networking, machine vision, and medical imaging; its customers will decide whether the hardware has the latency, oomph, and processing pipelines for their applications.

"In automated systems, sensor data has diminishing value with time and must operate on the freshest information possible to enable the lowest latency deterministic response. In industrial and medical applications, many decisions need to happen in milliseconds," Chetan Khona, AMD's senior director of industrial vision, healthcare, and science markets, gushed in a statement.

In order to hit these latency targets, AMD encourages developers to break up their workloads into smaller parts that can be individually accelerated by the platform's various compute architectures. For example, the Adaptive SoC's FPGA and AI engines could be used to pre-process and classify streaming data from multiple sensors or feeds, while the Ryzen processor's CPU and GPU cores run the control systems and graphical user interface.
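That split can be pictured as a simple staged pipeline. The sketch below is purely conceptual and not AMD's API: the `preprocess`, `classify`, and `control_step` functions are hypothetical stand-ins for work that, on real Embedded+ hardware, would land on the Versal FPGA fabric, the AI engines, and the Ryzen CPU cores respectively, with the queue standing in for the PCIe hand-off between the two chips.

```python
import queue
import threading

# Hypothetical stand-ins for the stages AMD describes. On actual
# Embedded+ hardware these would be offloaded to different silicon;
# here they are plain Python functions, for illustration only.

def preprocess(frame):
    # FPGA-style stage: e.g. normalise raw sensor values to 0..1.
    return [x / 255.0 for x in frame]

def classify(features):
    # AI-engine-style stage: e.g. run a small model over the features.
    return "obstacle" if max(features) > 0.5 else "clear"

def control_step(label):
    # Ryzen-CPU-style stage: the control system acts on the result.
    return "brake" if label == "obstacle" else "continue"

def pipeline(frames):
    """Chain the stages with a queue, mimicking the PCIe hand-off
    between the Versal SoC (producer) and the Ryzen host (consumer)."""
    results = []
    q = queue.Queue()

    def producer():
        for frame in frames:
            q.put(classify(preprocess(frame)))
        q.put(None)  # sentinel: no more sensor data

    t = threading.Thread(target=producer)
    t.start()
    while (label := q.get()) is not None:
        results.append(control_step(label))
    t.join()
    return results

print(pipeline([[10, 20, 30], [200, 250, 100]]))  # ['continue', 'brake']
```

The point of the structure, rather than the toy maths, is that each stage can be accelerated independently: swap a stage's implementation for a hardware offload and the rest of the pipeline is untouched.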

Of course, that happens all the time in mixed-core systems, and AMD isn't the first to put a mix of architectures on one board or even in a single chip. That much is obvious. What's interesting here is that AMD is doing so with not only Ryzen and Versal families but also a strong emphasis on AI at the embedded and network edge end, which it wouldn't do if people didn't want it. Ideally.

Among the first systems based on AMD's Embedded+ design is Sapphire's ever so creatively named Edge+ VPR-4616-MB. This connects a quad-core Ryzen Embedded R2314 processor to a Versal AI Edge VE2302 Adaptive SoC on a mini-ITX-sized board that reportedly consumes as little as 30 watts. Sapphire also plans to offer the motherboard as a fully assembled computer with memory, storage, PSU, and chassis. ®

 

https://www.theregister.com//2024/02/07/amd_compute_systems/
