Future Tech

Bug bounty hunters load up to stalk AI and fancy bagging big bucks

Tan KW
Publish date: Fri, 27 Oct 2023, 04:26 PM
Tan KW
0 461,328
Future Tech

Google has expanded its bug bounty program to include its AI products, and will pay ethical hackers to find both conventional infosec flaws and bad bot behaviour.

The Chocolate Factory wants bug hunters to poke around for five categories of attacks.

These include techniques like prompt injection, in which an attacker uses adversarial prompts to manipulate the output of the large language models such that it will override prior instructions and do something completely different.

Also on the list is training data extraction - essentially reconstructing training data to leak sensitive information - and other model manipulation attacks that either backdoor systems or provide poisoned training data to change the model's behavior.

Google will also pay rewards for adversarial perturbation attacks in which an attacker provides inputs to trigger a misclassification in a security control, and finally good old fashioned data theft - being specific to confidential or proprietary model-training data, in this case.

Google may also pay for finding other flaws in its AI products if the bug meets the qualifications listed on its vulnerability rewards program page.

"It is important to note that reward amounts are dependent on severity of the attack scenario and the type of target affected," engineers Eduardo Vela, Jan Keller, and Ryan Rinaldi wrote, directing would-be bug hunters to this reward table.

The AI-specific attacks Google has added to its programs were chosen based on findings of an internal AI red team that the ad biz formed a couple years ago.

"You have a whole new set of TTPs [tactics, techniques and procedures] that adversaries can use when they are targeting systems that are built on machine learning," Daniel Fabian, head of Google Red Teams, told The Register during an interview ahead of this August's Hacker Summer Camp in Las Vegas.

53% call GenAI tools 'major target'

Google's newest bug bounty comes as HackerOne's latest annual report finds more than half (55 percent) of the ethical hackers in its community say generative-AI tools will become a "major target" for them in the near future, and 61 percent plan to use and develop tools that use AI to actually find vulnerabilities.

"Hackers are curious people, they want to understand the cutting edge," Alex Rice, founder and chief technology officer at HackerOne, told The Register.

"It's less about trying to develop a niche, and more about this new technology, there's going to be security problems here, that's my purpose in life is to go find security problems" Rice said. "Hackers flock to emerging technologies of all sorts and that's definitely the case with AI as well."

The bug bounty as-a-service platform is already seeing some of its vulnerability hunters specializing in things like prompt injection, detecting bias, and polluting training data, Rice said.

With the latter, "it's almost indirect to AI," he said, explaining that access to training data used to be a lot harder to access. "Suddenly, it's a little bit more accessible because these models need access to it, there's a team of data scientists that have access to it, so that's opening up a new category of risks that we've seen hackers start to specialize in."

HackerOne’s report, based on a survey completed by 2,384 ethical hackers active on the platform in the last month, also found 62 percent of respondents said they plan to specialize in the OWASP Top 10 for Large Language Models.

"I think we're going to see quite a different level of specialization in generative AI," Rice said. "You're not going to see people referred to as a generative AI specialist, you're going to have people specializing in particular areas within the ecosystem." ®

 

https://www.theregister.com//2023/10/27/google_ai_bounty_hackerone/

Discussions
Be the first to like this. Showing 0 of 0 comments

Post a Comment