Future Tech

Lawyers say US cybersecurity law too ambiguous to protect AI security researchers

Tan KW
Publish date: Fri, 09 Aug 2024, 08:02 AM

Existing US laws against illegally breaking into computer systems don't accommodate modern large language models (LLMs), and can expose researchers to prosecution for what ought to be sanctioned security testing, say a trio of Harvard scholars.

As written and interpreted by legal precedent, the Computer Fraud and Abuse Act (CFAA) doesn't really apply to prompt injection attacks at all, Harvard Berkman Klein Center for Internet and Society affiliates Ram Shankar Siva Kumar, Kendra Albert and Jonathon Penney explained at Black Hat this week.

What that means, says Albert, a lawyer and instructor at Harvard Law School's cyber law clinic, is that it's hard to tell where prompt injection and other exploits of LLMs cross into the realm of illegality. 

"Ram, John and I were having coffee last September, and [the two of them] were talking about whether prompt injection violates the CFAA," Albert told The Register. Albert isn't sure prompt injection does violate the CFAA, but the more the trio dug into this, the more they found uncertainty.

"There's a set of stuff that we're pretty sure violates the CFAA, which is stuff where you're not allowed to access the machine learning model at all," Albert said. "Where it gets interesting is where someone has permission to access a generative AI system or LLM, but is doing things with it that the people who created it would not want them to." 

The CFAA may have been able to protect AI researchers a few years ago, but that all changed in 2021 when the US Supreme Court issued its decision in Van Buren v United States. That decision effectively narrowed the CFAA, holding that the Act only applies to someone who obtains information from areas of a computer (eg, files, folders, or databases) that their account wasn't given authorized access to.

That may be well and good when we're talking about clearly defined computer systems with different areas restricted via user controls and the like, but neither Van Buren nor the CFAA maps well onto LLMs, the Berkman Klein affiliates said.

"[The US Supreme Court] was sort of thinking about normal file structures, where you have access to this, or you don't have access to that," Albert said. "That doesn't really work well for machine learning systems more generally, and especially when you're using natural language to give a prompt to an LLM that then returns some output."

Illegal access, as far as the CFAA and SCOTUS are concerned, needs to involve some degree of breaking through a barrier into a system, which Albert said isn't as clear-cut when a security researcher, penetration tester, red team member, or just a kid messing around with prompts on ChatGPT manages to break an AI's guardrails.

Even if one were to argue that getting an AI to spill the contents of its database could be characterized as unauthorized entry under the CFAA, that claim isn't clear-cut either, said Siva Kumar, an adversarial machine learning researcher.

"There's a very probabilistic element [to AIs]," Siva Kumar told us. "Databases don't generate, they retrieve."

Trying to retrofit a 2021 legal decision onto LLMs, even after a few brief years, can't be done cleanly, he added. "We knew about [LLMs], but the Supreme Court didn't, and nobody anticipated this."

"[There's been] stunningly little attention paid to the legal ramifications of red teaming AI systems, compared to the volume of work around the legal implications of copyright," Siva Kumar added. "I still don't know even after working with Kendra and John - top legal scholars - if I'm covered doing a specific attack." 

Albert said it's unlikely Congress will make changes to existing laws to account for prompt injection testing against AIs, and that the issue will likely end up being litigated in court to better define the difference between legitimate-but-exploitative and plainly malicious AI prompts. 

In the meantime, Albert is worried that CFAA ambiguity and overly litigious AI companies might chase away anyone acting in good faith, leaving undiscovered security vulnerabilities ripe for exploitation.

"The real risks of these kinds of legal regimes is great," Albert said. "You manage to discourage all of the people who would responsibly disclose [a vulnerability], and not deter any of the bad actors who would actually use these for harm." 

So, how should security researchers act in this time of AI ambiguity?

"Get a lawyer," Albert suggested. ®

 

https://www.theregister.com//2024/08/08/lawyers_say_us_cybersecurity_law/
