
Microsoft expands Copilot bug bounty targets, adds payouts for even moderate messes

Microsoft is so concerned about security in its consumer Copilot products that it has lifted bug bounty payments for moderate-severity vulnerabilities from nothing to a maximum of $5,000, and expanded the range of vulnerabilities it will pay people to find and report.

The payouts for less severe vulns were introduced because the software giant thinks “even moderate vulnerabilities can have significant implications for the security and reliability of our Copilot consumer products,” explained Microsoft bounty team members Lynn Miyashita and Madeline Eckert earlier this month.

Under the Copilot Bounty Program, researchers who identify and disclose previously unknown vulnerabilities can earn between $250 and $30,000. As is typical in bug bounty programs, higher payouts are reserved for those who report the most serious vulnerabilities, such as code injection or model manipulation.

Microsoft classifies security flaws into four severity levels – Critical, Important, Moderate, and Low – based on the Microsoft Vulnerability Severity Classification for AI Systems and the Microsoft Vulnerability Severity Classification for Online Services.

Redmond also recently expanded the Copilot (AI) Bounty Program to cover 14 types of vulnerability, up from three, an understandable decision given its push to embed the generative AI assistants across its product portfolio.

The three old-school vulns are inference manipulation, model manipulation and inferential information disclosure.

The new vuln types Microsoft wants bug hunters to find are deserialization of untrusted data, code injection, authentication issues, SQL or command injection, server-side request forgery, improper access control, cross-site scripting, cross-site request forgery, web security misconfiguration, cross origin access issues, and improper input validation.

Crucially, Microsoft has asked AI bug hunters to start probing an expanded list of Copilot services.

Redmond launched its first AI bug bounty program in October 2023 for Bing’s AI-powered features, then extended it to Copilot in April 2024.

In addition to the revised bug bounty rewards and targets, Microsoft last year announced new training for “aspiring AI professionals” under its Zero Day Quest, which now includes workshops, access to Microsoft AI engineers, and research and development tools.

Microsoft’s latest security efforts come as it and almost every other major tech company race to pack generative AI into their stuff, sometimes without fully addressing – or understanding – the security and privacy risks involved, as was the case with Windows Recall.

Time and again, researchers have found ways to jailbreak the large language models (LLMs) that underpin services like Copilot, raising worries that criminals could use AI to develop weapons or carry out cyberattacks.

It’s also possible to manipulate LLMs by intentionally introducing misleading data into their training datasets. These data poisoning attacks can cause models to generate incorrect or harmful outputs, with potentially serious real-world consequences if the models are used in fields such as healthcare.

It’s doubtful that the software vendors will slow the introduction of AI into their products. Maybe bigger bug bounties will motivate an army of hunters to find the worst flaws before miscreants exploit these weaknesses. ®
